I was taught to use slightly more hardener to ensure a full cure when mixing small batches. Small batches are easier to get off ratio; with large mixes you should not have ratio problems.
Larry
L GALILEO THE EPOXY SURFACE PLATE IS FLAT
Hi Cameron
Nice work. So, now that we are starting to get real data again, a real experimentalist would do what is called a "factorial analysis". It is based on the work of Box, Hunter, and Hunter.
The basic idea is that you run an experiment laid out like a body-centered cubic crystal (corners and center of the box) for 3 variables. All results lie within this "box" of collected data, and a little math can extract them. This method works remarkably well for many variables/dimensions, although sometimes I don't understand how.
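A minimal sketch of that "corners and center of the box" layout for three coded factors, using NumPy. The response values here are made up purely to show that a least-squares fit recovers the main effects; they are not epoxy data.

```python
import numpy as np
from itertools import product

# Corners of the "box": every combination of low (-1) and high (+1)
# for three coded factors, plus the center point (0, 0, 0).
corners = np.array(list(product([-1.0, 1.0], repeat=3)))
design = np.vstack([corners, [0.0, 0.0, 0.0]])  # 9 runs total

# Toy response: a hypothetical measurement at each run with known
# main effects (illustrative numbers only, not real test data).
y = 10 + 2 * design[:, 0] - 1 * design[:, 1] + 0.5 * design[:, 2]

# Fit main effects: y ~ b0 + b1*x1 + b2*x2 + b3*x3
X = np.column_stack([np.ones(len(design)), design])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)  # recovers [10, 2, -1, 0.5]
```

With a real experiment the response column would come from the test machine, and the interesting part is which coefficients turn out to be near zero.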
Any good materials guy uses it. :stickpoke
In any event, when I had to do it in 1982, the math was all performed manually and was a PITA (about a week of analysis for 1 day of testing). I think it is now built into Mathematica and similar PC programs and is much easier to use.
Why use it? Because it takes a lot fewer samples to get the same quality of results, often a factor of 5-10 fewer. It does require you to let the math tell you what your samples should be, not the other way around.
Hi, just to add to the post-cure question: 100 C is not going to make it for an epoxy that calls for a 120 C cure. Time does not always make up for a lack of activation energy, but even if it did, a good approximation is that for every 3 C lower, you need to double the cure time. That would make it about 128 hrs at 100 C if I did the math right.
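That doubling rule can be sketched in a couple of lines of Python. Note this uses the 121 C datasheet temperature mentioned later in the thread (which makes the arithmetic come out to exactly 128 hours), and the 3 C doubling step is a rule of thumb, not a material constant:

```python
# Rule of thumb from the post: cure time roughly doubles for every
# 3 C below the specified cure temperature (an approximation only).
def equivalent_cure_hours(base_hours, spec_temp_c, actual_temp_c,
                          doubling_step_c=3.0):
    """Scale a cure schedule down to a lower oven temperature."""
    return base_hours * 2 ** ((spec_temp_c - actual_temp_c) / doubling_step_c)

# A 1-hour schedule specified at 121 C, run at 100 C:
# 2 ** ((121 - 100) / 3) = 2 ** 7 = 128
print(equivalent_cure_hours(1.0, 121.0, 100.0))  # 128.0
```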
harryn,
Good Call and nice method description.
For the record, I'm still in the process of testing equipment and test procedures, so I'm not focused yet; the data I'm producing is still almost incidental. I've already learned I need to get a mixer instead of my sticks, and a balance big enough to weigh all the components of my test batches simultaneously. Epoxy mixed several percent off the desired ratio is not going to cut it with these kinds of sample sizes.
It's amazing how much work it is to produce and test samples. And if anyone thinks watching paint dry is fun, try babysitting an industrial oven for two hours on the off chance it might try to burn down the building. It might have tried, had I not heard buzzing on my way out the door and realized the off switch was not working.
Thanks for the suggestion about three-factor testing. I haven't thought about it yet, as I am still trying to isolate and prevent boneheaded mistakes.
Regards all,
Cameron
My first read through prompted several thoughts.
Firstly, the structural chemistry certainly bears out observations of increase in strength with slight excess of hardener.
Secondly, this work, though mentioning composites in passing, is testing variables within a bulk material. Even in simplistic terms, one could guess that a degree of elasticity is far better than having a brittle matrix.
Thirdly, the mention and photos of the fracture structure at the 100 micron size shown suggest that including aggregates, especially ones down to and below this size, is going to have a major impact on the behaviour, if the fracture structure is such a good indicator of tensile strength.
This is pure supposition on my part, but there you go.
Finally, there is a mention several times of water absorption having an effect, but unfortunately no hint of the order of magnitude it has. So we don't know if other effects are greater or less than the one discussed here.
In fact what really worries me is the number of extra dimensions that are being introduced to the problem of maximizing the strength of our EG formulation.
Somewhere way back there was a brief discussion justifying the development of the thread along the lines that, by following the experimental method that de Larrard produced for high strength concrete, we could achieve a greater strength than shown previously by casual mixes made along the lines of traditional concrete formulations.
I firmly believe that this will be borne out by the current testing of samples.
However, unless we can get a handle on what order of magnitude a small adjustment in the hardener ratio will make to the strength, we may all drown in an ever deepening sea of variables.
It's like doing jigsaw puzzles in the dark.
Enjoy today's problems, for tomorrow's may be worse.
I have performed the D790-ish tests on the 37-127/37-606 mixture at about 65 degrees F.
The flexural modulus at 0.035 in deflection is 366 ksi plus or minus 25 ksi (n=10).
The deflection at failure was around 1.05 inches for pure epoxy when I was able to induce failure, whereas for Jack's E/G samples the maximum deflection was around 0.05 inches at failure. Only 2 of 10 samples failed; I had to stop the test machine manually on the remainder to avoid the sample and test head bottoming out on the test fixture. 10% is a large deflection in a D790 test. These samples had 400%.
I have not corrected the following data for deflection (which may be impossible given the several hundred percent deflection), but uncorrected, the epoxy had an ultimate flexural strength of at least 12.2 ksi plus or minus 0.6 ksi (n=2, as only 2 samples went to failure, and these were way outside normal D790 conditions).
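For reference, the standard small-deflection three-point-bend relations from D790 that turn raw load/deflection data into stress and modulus look like this. The dimensions in the example are illustrative, not the actual sample geometry, and as noted above these formulas break down at the 400% deflections seen here:

```python
# ASTM D790 small-deflection relations for a three-point bend test.
# P = load, L = support span, b = width, d = depth, m = slope of the
# initial straight-line portion of the load-deflection curve.

def flexural_stress_psi(load_lb, span_in, width_in, depth_in):
    # sigma = 3PL / (2 b d^2)
    return 3.0 * load_lb * span_in / (2.0 * width_in * depth_in ** 2)

def flexural_modulus_psi(slope_lb_per_in, span_in, width_in, depth_in):
    # E_B = L^3 m / (4 b d^3)
    return span_in ** 3 * slope_lb_per_in / (4.0 * width_in * depth_in ** 3)

# Hypothetical bar: 4 in span, 0.5 in wide, 0.25 in deep.
print(flexural_stress_psi(100.0, 4.0, 0.5, 0.25))    # 19200.0 psi
print(flexural_modulus_psi(1000.0, 4.0, 0.5, 0.25))  # 2048000.0 psi
```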
In summary, the epoxy we have is very rubbery and tough after a cure at 100C. It is this rubberiness, combined with the rigidity of the aggregate, that makes the E/G a good damping material. We still do not know, however, how damping is affected by aggregate percentages.
Also, it looks like harryn is right that I bungled my cure as it should have been at 121C according to the data sheet.
In conclusion, the epoxy holds it together and the aggregate makes it rigid.
Regards all,
Cameron
Zumba,
Thanks for the tip. I just looked at Hexion's numbers again. We're only off by a factor of a bit more than 2 on the modulus numbers. There's some effect on the aggregate packing from the size of the molds Jack's samples were poured in but in general, from the packing models, our aggregate mix is near optimal for that grading span.
My current round of pure epoxy samples was poured into a larger mold than Jack's and then sawn to size with a diamond saw, which may turn out to be a better procedure for future samples.
The Hexion mineral casting epoxy is a bit more reactive than the Reichhold stuff we've got, as it has a 30 g/equivalent lower EEW. I'm thinking that this means more cross-linking and thus higher modulus.
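The standard stoichiometric arithmetic behind that reasoning is a short calculation. The EEW/AHEW numbers below are placeholders for illustration, not the actual Reichhold or Hexion datasheet values:

```python
# Parts of hardener per hundred parts of resin (phr) for an
# amine-cured epoxy: phr = 100 * AHEW / EEW, where EEW is the resin's
# epoxide equivalent weight and AHEW is the hardener's amine hydrogen
# equivalent weight.  Values here are hypothetical.
def hardener_phr(resin_eew, hardener_ahew):
    return 100.0 * hardener_ahew / resin_eew

# A lower EEW means more epoxide groups per gram of resin, so more
# hardener is consumed and the cured network is more tightly
# crosslinked.
print(hardener_phr(190.0, 30.0))  # 15.79 phr at EEW 190
print(hardener_phr(160.0, 30.0))  # 18.75 phr at EEW 160 (more reactive)
```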
Because of all the aggregate modeling that we have done, I hypothesize right now that the aggregate is about as good as can be achieved and that our modulus problems lie in technique, epoxy and additives probably in that order.
Regards all,
Cameron
Cameron, those are some interesting numbers in post #3002.
Anocast's PDF shows that their material has a flexural strength of 2500 psi (much weaker than jhudler's sample) but a Young's modulus of 5.25 x 10^6 psi (much more rigid). The Hexion numbers are very similar.
Having slightly less than ideal aggregate packing could account for a lot of this added strength and elasticity, since that would mean there's more epoxy between the rocks and epoxy is quite elastic by nature.
I'm curious as to how different resins and hardeners compare. What kind of resin is used for laying carbon fiber?
Apparently Northrop Grumman and the US Air Force don't have that answer.
http://query.nytimes.com/gst/fullpag...5BC0A961958260
Larry
To really have fun, however, you need Northrop working with the Army, not the Air Force. If genius can be described as poetry in motion, then Army plus Northrop equals poseurs in motion.
That being said, it was observed by Zumba that our current epoxy, due to the huge deflection it will support, is stronger than the Hexion stuff but less rigid. This raises the question of whether there is some other hardener, or mixture of hardeners and additives, that will produce closer to the desired properties. My assumption, based on the Brazilian paper and greybeard's comments, is that faster hardeners probably produce stiffer epoxy. It also leads me to start wondering about the cobalt acetylacetonate again.
The formula we have right now with the high compliance is probably quite good at damping because of the seemingly viscoelastic properties of the epoxy. It's possible that a base of this stuff with more solid components in the actual mechanical portion would be advantageous.
Roach of course pointed out that the fact that our epoxy is similar to their secret formula might not be good enough.
So, ladies and gentlemen, get out your thinking caps and let's figure out whether we need to change resins, hardeners, additives, or all three, and what they should be. This is not a rhetorical question: longtime readers, newbies, and anybody else with an opinion are encouraged to put forward their theories.
I'm going to write to the Reichhold apps engineer and send him the Hexion datasheet.
Regards all,
Cameron
I started using epoxy when I was in elementary school, building model airplanes. Got the stuff in little 4-8 oz bottles from the hobby shop. The 5-minute epoxy definitely cured harder than the 20-30 minute stuff.
I wonder if BYK has some answers. They make additives specifically for epoxy mineral castings... surely they'll know how to obtain a stiffer (and more brittle) product.
Epoxitech makes an epoxy resin formulated for marble repair, please refer to the following attachment: http://www.cnczone.com/forums/attach...0&d=1180708464
Epoxitech is also known as East system epoxy. The product has a 45-50 minute pot life and cures at room temperature.
Best regards
Bruno
It looks like there is a lot of experimentation going on here. Can anyone recommend a working formula for epoxy granite, with sources for the materials? With almost 250 posts on this subject it's hard to weed out the facts from the experiments. Thanks!
What epoxy works best? US Composites seems to be cheap. What products work best: slow or fast hardeners, thick or thin resin?
What mix? Can I get everything at my local home center, or do I need to special order?
Sorry, this may be redundant, but there are a lot of posts on this subject.
Hmm, I must not have hit post earlier, so here is a redo.
harryn, an extension to your factorial analysis for data analysis is "Design of Experiments", or DoE: http://en.wikipedia.org/wiki/Design_of_experiments (sadly, the Wikipedia article is of lesser quality than most and does not do the process full justice). DoE is a tool to help plan experiments with multiple interconnected variables without doing a full sweep of every possible combination. DoE uses statistics to tease out influencing coefficients between variables and can evaluate how each variable affects the others, even when multiple things are changed with each sample. As mentioned earlier, the math is a bear, and programs like Minitab are really the only way to tackle this.
Unlike factorial analysis, which is primarily an analysis tool, DoE is used to determine which variables are changed on each sample and statistically reduces the number of samples needed. The real magic is that DoE can take the influencing coefficients it generates and use them to spit out an optimized formula automatically.
Aside from academic use, I have used this to sort out a year-long build/tweak/redesign cycle on a new spring clutch design. The design was broken into 13 variables, and I believe we ran 50 or so samples (it may have been a little more, but I'm not sure; either way it was not a lot of samples for 13 variables). Not surprisingly, none of the samples worked, but that wasn't the point. Once the computer crunched the data, we built up the "optimal" result and it worked perfectly. This was a problem that dozens of engineers could not solve with their best efforts in measuring, tweaking, designing, and guessing, but a 2-day experiment solved everything.
Now the drawbacks: most DoE trials use linear interpolation and assume straight-line relationships between the high and low test points used for each variable. Tests with 3 and 4 points per variable are possible, but the required samples begin to increase dramatically. Surprisingly, this is rarely a problem, but EG may be a bit unique. The other problem is the requirement to break the test down into individual independent variables. While this may not be possible for some of the aggregate tests, the "other" factors like epoxy and add-ins are certainly candidates.
Finally, the most obvious problem is actually creating and testing 15-20 or more unique sample formulations (preferably with 5 individual specimens each).
If anyone is thinking of an "extensive" test procedure, this is something to look at because you can get a lot more bang for your experimenting buck by correctly choosing which variables to change with each test.
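The sample-saving trick can be sketched concretely: a 2^(4-1) half-fraction studies four two-level factors in 8 runs instead of 16 by generating the fourth column as the product of the first three. The factor labels and response numbers below are made up to show the mechanics, not an actual E/G test plan:

```python
import numpy as np
from itertools import product

# Full 2^3 design for factors A, B, C, then generate D = A*B*C.
# This aliases D's main effect with the ABC interaction -- the price
# paid for halving the run count.
abc = np.array(list(product([-1.0, 1.0], repeat=3)))
d = (abc[:, 0] * abc[:, 1] * abc[:, 2]).reshape(-1, 1)
design = np.hstack([abc, d])  # 8 runs x 4 factors

# Toy response with known main effects (C deliberately inert).
y = 5 + 1.0 * design[:, 0] + 2.0 * design[:, 1] - 0.5 * design[:, 3]

# Least-squares fit of intercept plus the four main effects.
X = np.column_stack([np.ones(8), design])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coeffs, 3))  # [5, 1, 2, 0, -0.5]
```

Because the design columns are orthogonal, each coefficient is estimated independently, which is why the inert factor C shows up cleanly as zero.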
John
greybeard John,
Spin casting could indeed preload your castings in compression, thus making it the "orbis canis". (My engineering concentration was in mathematical statistics; I didn't take Latin.) It is also true that the paper by B.W. Staynes suggests that E/G should be cast under pressure, so you have something.
The intent of my comment however was to suggest that the material with the 37-127/37-606 in my test was kind of like a hockey puck: not truly rigid but at the same time pretty hard. It wouldn't be optimal for a precision machine structure but it would make a nice base if the parts placed on top were more rigid.
sigma John,
The idea of careful experiment design is an excellent one. We have an excellent model of the aggregate packing so the variable for aggregate in an actual material is probably Phi, the aggregate packing density. While I would not claim to be a crack statistician, I completely understood the example in the wikipedia article and would be capable of employing such techniques if I were to study some.
There is a huge list of factors that affect E/G. I listed the ones I had identified all the way back at <A href="http://www.cnczone.com/forums/showpost.php?p=306963&postcount=1485"> Post 1485</A>. (I'm not psychic: I used the index thread.)
Briefly speaking, I think the following are the likely model variables:
Epoxy Modulus
Epoxy Strength
Phi (Aggregate Packing Density)
Aggregate Modulus
Aggregate Strength
Void Percentage
Bonding agent concentration
Catalyst concentration
Nano-reinforcement concentration
Epoxy probably breaks down further into:
Epoxide Equivalent Weight of Resin
Amine Hydrogen Equivalent Weight
Mix Ratio
Cure Temperature profile
Post Cure Temperature Profile
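As a rough illustration of why screening matters with a factor list this long: a full two-level factorial over even a subset of these variables explodes, while a resolution-III fraction only needs the next power of two above the factor count. The subset chosen below is arbitrary (names taken from the list above, coded levels hypothetical):

```python
# A hypothetical subset of the candidate E/G variables listed above.
factors = [
    "phi_packing_density",
    "void_percentage",
    "bonding_agent_conc",
    "catalyst_conc",
    "nano_reinforcement_conc",
    "mix_ratio",
]

# Full two-level factorial: 2^k runs.
full_runs = 2 ** len(factors)

# Minimal resolution-III screening fraction: smallest power of two
# that exceeds the number of factors (room for k main effects + mean).
frac_runs = 1
while frac_runs < len(factors) + 1:
    frac_runs *= 2

print(full_runs, frac_runs)  # 64 vs 8
```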
The 5-sample requirement won't be too bad for me, as the mold I have machined out of UHMW produces a block that is sawn into five samples. In short, I have already tested 4 formulations if you count pure epoxy, so the lots-and-lots-of-tests problem probably isn't going to be an issue.
Regards all,
Cameron
Hi John.
I think this is the nub of the problem with EG, and from two standpoints.
In your own example, buying springs from Acme Springs Inc. or Zebedee International should not make a difference to anyone trying to reproduce your clutch design, an assumption being that springs will always be used in the linear part of their behaviour.
But in the EG, the ability of a mix to absorb energy is, I would guess, non-linear in the extreme, depending on several unpredictable or non-linear variables.
The general angularity of the aggregates, their relative fracture strengths, and the packing density achieved: the first two are dependent on locally sourced material, and the last is geometrically non-linear.
The ratio of epoxy to aggregate will also have a non-linear effect (akin to the effect of adding water to dry sand), moving from its adhesive properties to becoming the vehicle for fracture propagation.
I think the best we might hope for is that Cameron's testing will give us good pointers to what might be achieved regarding ultimate EG strength in terms of particular recipes, along with a guide to what improvements are possible in terms of additives.
Regards
Greybeard John
The above was written before I read Cameron's posting, but I think you get the layman's drift.
Also thanks for the wiki link. I'm going to add the "milk in tea" reference to my list of important trivia.
greybeard
I am concerned about the nonlinearity of the solution as well, and am a bit troubled by what it implies. This came up briefly when I was in school, and the professor's reply was something along the lines of: "You are correct to be concerned about the linearity assumption, but the linearity implied is between each pair of variables, not for the solution as a whole. Almost every 10+ variable system is 'nonlinear', but reasonable approximations can be made between each pair of variables." He went on to explain that unless there is a major inflection point centered between the data points (i.e. your test points lie on either side of a peak), you are unlikely to ruin the entire solution. This is because every pair is evaluated and the effects are summed; if an inflection point is missed, the slope of the line between the two variables will be lower than in the real system, and that variable pair will simply have less influence on the final solution.
I did not make a big deal about the 3- and 4-point tests, but those will break the solution into smaller linear segments, and a V shape in the influencing curve indicates a nonlinearity. That brings me to another point I forgot: the individual influencing curves between each pair of variables can be summed and viewed to give a quick reference as to whether a particular variable has any significance in the results. If the slope is flat or low, it does not affect the results and can be left out of future trials. It is still a good idea to look at each of the variable-pair graphs, however, as there could still be one pair that has an effect. The results are often all plotted on a single graph for quick comparison of relative effects.
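The professor's caveat can be shown numerically: with a response that peaks between the two test levels (the quadratic here is made up for illustration), the two-point slope reads as exactly zero, making the factor look inert, while adding a third, center level exposes the curvature:

```python
# Hypothetical response with a peak centered between the test points.
def response(x):
    return 1.0 - x ** 2  # maximum at x = 0

lo, hi = -1.0, 1.0

# Two-level test: the fitted slope between the endpoints vanishes,
# so this factor would be dropped as insignificant.
two_point_slope = (response(hi) - response(lo)) / (hi - lo)
print(two_point_slope)  # 0.0

# Three-level test: the center point reveals the nonlinearity as the
# gap between the center response and the endpoint average.
curvature = response(0.0) - (response(lo) + response(hi)) / 2.0
print(curvature)  # 1.0
```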
John
John - thank you for your thoughtful reply.
Reading it I realized that...
... perfectly encapsulates what was in my mind re the recipe.
I would think that the effects of most of the additives that are being suggested will be handled by the methodology you are suggesting. It's all the grit in between that worries me.
From the beginnings of this thread I've been particularly intrigued by attempting to visualize the packing problem, and how we might achieve the maximum strength by tweaking the aggregate ratios/sizes.
Up to the present I've managed to get a static mental picture of a number of different-sized particles filling up a 3D space, of smaller and smaller spheres jammed into the ever-smaller spaces left by larger ones.
What I'm having a problem with now is a more dynamic picture of how the scene changes as a few too many small particles start to push the larger ones apart. Thus, as this population increases, at some critical point the larger particles are in a sense lubricated, and crack propagation flows around them with little loss of energy.
I know the analogy is a bit sloppy, but I hope the picture conveys my point, because this is what I see as the "major inflection point" mentioned by your prof.
To confound the problem, this situation will be repeated for each of the pairs of different sizes of aggregate in the recipe.
Regards
Greybeard