Epoxy-Granite machine bases (was Polymer concrete frame?)
  1. #2781
    Join Date
    Apr 2007
    Posts
    777
    ad_bfl,

    0.08 mm -> 10 mm is the range of sizes used in the curve fits by de Larrard in the book.

    Looking through the book,

    Beta_i looks to be between 0.593 and 0.632 for round and between 0.523 and 0.585 for crushed. I've used beta_i's that were too big in my simulations recently; I believe I used 0.72 for round and 0.62 for crushed.

    Regards all,

    Cameron

  2. #2782
    Join Date
    Dec 2004
    Posts
    524
    My understanding is that beta is the proportion of volume that is filled when the material is put in a container. Is that the precise definition? Is there a standard way of measuring it?

    Is there a way that we could measure it? I'm thinking something like take a 10 cm x 10 cm x 10 cm box and fill it with the material. Add water without overflowing. Then beta is (1000 - the # of ml of water added)/1000.

    There are some missing details:

    How much do we shake the container after adding the material?
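    As a minimal Octave sketch of the arithmetic proposed above (the water volume is made up for illustration, and strictly this gives the actual packing density rather than beta, as the next post points out):

    % Octave sketch of the box-and-water measurement (numbers are placeholders)
    box_ml   = 1000;                       % 10 cm x 10 cm x 10 cm container volume in ml
    water_ml = 380;                        % ml of water added before overflow (assumed)
    packing  = (box_ml - water_ml)/box_ml  % fraction of the box filled by solids, here 0.62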

    Regards to all,

    Ken
    Kenneth Lerman
    55 Main Street
    Newtown, CT 06470

  3. #2783
    Join Date
    Apr 2007
    Posts
    777
    Ken,

    Water changes the dynamics. Beta is the theoretical maximum density, not the actual density. Measured values are subject to the container wall effect and other constraints, whereas beta is corrected to infinite volume.

    According to the book (pg 225):

    Dry aggregate is placed in a cylinder with a diameter at least 5 times the maximum aggregate diameter and vibrated for 2 minutes at an acceleration of 4 g with a piston on top applying 10 kPa of pressure.

    The actual packing density for aggregate of a single average size is computed from the following equations:

    Phi=4*M_d/(Pi*d^2*h*rho_d)

    Where Phi is the actual packing density, M_d is the mass of dry aggregate, d is the diameter of the container, h is the final height of the aggregate in the container and rho_d is the intrinsic density of the aggregate.

    Beta_avg=(1+1/K)*Phi
    K is the packing process value, which is 9.0 for the test procedure given above.


    The value for Beta_avg must then be corrected for the container size of the sample:

    Beta=Beta_avg/(1-(1-k_w)*[1-(1-d_ad/d_cyl)^2*(1-d_ad/h)])

    where k_w is the container wall effect coefficient, .88 for round grains, .73 for crushed grains. (This can be measured too with a huge measurement regimen), d_ad is the diameter of the dry aggregate and d_cyl is the diameter of the cylinder.

    For an aggregate type with subtypes, there is a slightly longer equation, but this info is so mathy it needs to be in the LaTeX paper where I can write decent math symbols.

    In short, yes, beta can be measured, and it can be measured with simple instruments that any of us have, given a suitable vibration table to bring the loaded cylinder to 4 g of acceleration.
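    To make the procedure concrete, here is a small Octave sketch of the three equations above; the measurement values are invented placeholders and k_w is taken as 0.88 for round grains:

    % Octave sketch of the packing-density equations above (measurement values are made up)
    M_d   = 2.0;       % kg of dry aggregate in the cylinder
    rho_d = 2650;      % kg/m^3, intrinsic density of the aggregate (quartz assumed)
    d_cyl = 0.10;      % m, cylinder diameter (>= 5x max aggregate size)
    h     = 0.16;      % m, final height after 2 min at 4 g with 10 kPa on the piston
    d_ad  = 0.005;     % m, aggregate diameter
    k_w   = 0.88;      % wall effect coefficient (0.88 round, 0.73 crushed)
    K     = 9.0;       % packing process value for this procedure

    Phi      = 4*M_d/(pi*d_cyl^2*h*rho_d);    % actual packing density
    Beta_avg = (1 + 1/K)*Phi;                 % virtual packing density, uncorrected
    Beta     = Beta_avg/(1 - (1 - k_w)*(1 - (1 - d_ad/d_cyl)^2*(1 - d_ad/h)));
    printf("Phi=%.3f  Beta_avg=%.3f  Beta=%.3f\n", Phi, Beta_avg, Beta);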

    --Cameron

  4. #2784
    Join Date
    Jan 2008
    Posts
    48
    I've been lurking for a while, but have read every last post in this thread (and most long threads on the Zone for that matter) and have decided to surface. I am a young mechanical engineer with no immediate plans to build an EG (or whatever acronym is preferred) base, due mainly to logistics (small east coast apartment). Physical limitations, however, haven't limited my interest in this topic, and I figured that more opinions can't hurt.

    While I am very much into the mechanical aspect of my profession, I fully understand the challenges of optimizing a matrix of this size. Most of my programming experience has been in MATLAB, which fortunately is adept at purely math-based programming like this. While I am not going to propose rewriting what has been done, I have to agree with ad_bfl that the bigger-hammer approach may be just what is needed at this point in the game. A 1% increment solution of all 4- and 5-component mixtures should not be taxing to a modern PC, or a small fleet thereof (I have 3 modern dual-core Windows machines with nothing to do 22 hours a day if it turns out we need to farm out regions of the solution, or want to run 0.1% searches).

    Excel can't handle more than 2^16 rows of data, or 2^20 in the 2008 version, which few of us probably have. I remember looking into this issue a while ago and found a free spreadsheet program that could handle more than this, but the name escapes me at the moment. While a database is probably the correct way to manage this volume of information, a simple spreadsheet would allow the data to be sliced and diced by any parameter, including cost and density. An added bonus is that we don't have to worry about choosing the wrong influence coefficients for non-critical parameters, because all solutions are present.

    To find what we are looking for, simply sort the results by packing density, which will give a good feel for how many solutions there are within a few percent of the "best" packing density (with the caveat that this is a numerical solution with finite accuracy). From there, mixtures with a high likelihood to separate can be culled and cost data can be viewed.

    Checking the stability of the mixture to determine how much care is needed in production would require trivial sorting for all solutions within a few percent of the chosen solution.

    I understand the bug you can get when writing software, wanting it to efficiently calculate the optimal solution, but this is a DIY forum and CPU time is free. By no means am I advocating stopping the optimization software, as it will make a much simpler package to use, but I'd have to think a quick modification of the simulation program could knock out a 1-2% coarse simulation in reasonable time. Additional constraints, like limiting volumes to 3%-40% of the mixture and forbidding the use of similar sizes, should help reduce results to a reasonable number. All this talk about programming is giving me a hankering to drag out my old copy of MATLAB and put together something that generates all the solutions and does some basic post-processing, but I'd like to hear what the pros have to say about the big hammer approach first.

    OK, on to another issue that keeps coming up but I don't feel has been given enough credit, which is the overall modulus of the final design. Styrofoam pellets have a ~98% virtual packing density once processed, but it doesn't mean jack. I agree more is better when it comes to packing density, but once we hit the mid to high 80% range, I feel more care needs to be taken with what aggregate is used. Unfortunately this is going to be an empirical process where we can only make general statements about the mixture, but I feel it is an important component. While a high modulus can be at odds with the natural vibration damping of the material, I think we can deal with a slightly increased natural frequency in return for more stiffness. We can only remove so much cost from the formula through displacement of epoxy with aggregate. On the other hand, increases in modulus lead to a direct reduction in material requirements, which have the potential to double based on the results I've seen compared to published numbers.

    Sorry for the length, I have a tendency to be wordy.

    John

  5. #2785
    Join Date
    Jun 2005
    Posts
    1436
    Cameron - well, I went to the link you posted on optimization....... Ouch!
    It was like entering a foreign country where, though there seemed to be familiar territory ahead, a lot of the dialogue was just out of reach.

    "And the women in the uplands diggin' praties, speak a language that the strangers do not know..."came into my head, so I quietly packed my tent and stole away.

    Regards to all,
    John
    It's like doing jigsaw puzzles in the dark.
    Enjoy today's problems, for tomorrow's may be worse.

  6. #2786
    Join Date
    Jun 2005
    Posts
    334
    Quote Originally Posted by sigma relief View Post
    While a high modulus can be at odds with the natural vibration damping of the material, I think we can deal with a slightly increased natural frequency in return for more stiffness. We can only remove so much cost from the formula through displacement of epoxy with aggregate. On the other hand, increases in modulus lead to a direct reduction in material requirements, which have the potential to double based on the results I've seen compared to published numbers.
    John,
    This statement was like a hammer to the forehead!

    Hello! and welcome to the group!

    In all the work to achieve a high packing density, it never dawned on me that this would work against us.
    In the video (post 2774, thanks Quad2T) on YouTube, at about 3:00 you'll see a quick close-up picture.

    It has a lot of epoxy in it, and from other pictures I've seen, they don't appear to worry about packing density, given the obvious use of very large aggregates, 1 inch down to 50 microns. The amount of interstitial epoxy is more than one would normally expect when binding something together. I would characterize the aggregate as floating in the epoxy.

    I realize what we want to achieve: high aggregate density and a lower epoxy cost, with the added benefit of higher modulus.

    If the pros don't seem to care... should we? Are we repaving an old road?

    Food for thought... back to writing the simulator UI.

    Jack

  7. #2787
    Join Date
    Apr 2007
    Posts
    777
    I'm behind right now but I'm working on the simulator. It's running right now.

    In response to sigma_relief and Jack, I am under the impression that minimum epoxy will improve both the modulus and the creep properties of the designed material.

    I did a graph of the rule of mixtures calculation in Post 1082 (http://www.cnczone.com/forums/showpost.php?p=291506&postcount=1082). While I said strength in that post, which was wrong, it is normally correct to apply the rule of mixtures to modulus as we are discussing.

    sigma_relief, does this look right?

    Regards all,

    Cameron

  8. #2788
    Join Date
    Jan 2008
    Posts
    48
    I was doing some thinking last night as to how to implement such a brute force method and how long it would take to calculate. I figured the easiest way to run the calculations would be to generate an array of all reasonable mixture percentages of an arbitrary 5-component mixture. For starters, bounding this to 50,000 to 100,000 elements would be possible assuming a ~2-3% graduation of sizes and a limitation of 5-35% of any one component in the mixture.
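    A rough Octave sketch of that array-building step (using coarser 5% steps and the 5-35% bounds mentioned above, just to keep the example tiny; scoring against the packing model would come later):

    % Sketch: enumerate 5-component mixes in 5% steps with each component held to 5-35%
    step = 5;  lo = 5;  hi = 35;
    mixes = [];
    for a = lo:step:hi
      for b = lo:step:hi
        for c = lo:step:hi
          for d = lo:step:hi
            e = 100 - a - b - c - d;          % fifth component closes the mixture to 100%
            if e >= lo && e <= hi
              mixes(end+1, :) = [a b c d e];
            end
          end
        end
      end
    end
    printf("%d candidate mixes generated\n", size(mixes, 1));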

    From there, a similar array could be set up with all reasonable choices of ingredients. Either a rule can be added so that each smaller aggregate is within a certain percentage of the larger, or fields can be added to the material properties array to include the minimum and maximum number of smaller sizes to skip when choosing the next smaller component. This allows us to include all of the specialty ingredients at the bottom of the size range while preventing evaluation of mixtures using 3 "large" aggregates. The selection criteria for this array are not as straightforward, because each solution should have a broad size distribution, but a semi-intelligent parameter-creation loop combined with a few go/no-go tests at the end should create a list of between 5,000 and 10,000 mixture choices.

    This array of mixture selections could probably be fed into the simulation program, the way I understand how it works, but I am unaware of the processing time of that simulator. As it stands, there are a few hundred million to a few billion possible outcomes of such a simulation. For one machine to get this done in a month, it would need to generate 100 results a second. I am still trying to decide how best to implement the equations in Cameron's paper, but because we are only solving equations with known variables, an efficient bit of code that scans in a set of percentages as well as a set of mixture variables and crunches the numbers should be able to attain a decent throughput. While a month is still a long time for one PC to crunch numbers, the list of mixture choices can be broken into manageable chunks of a few hundred combinations and split among machines. I saw a mention that the current optimization code breaks up each element into multiple sub-elements to account for the range of particle sizes in each bin. This will greatly increase processing, but I think it is the only way to go.

    As for what to do with all the results, there is no one option. A local post-processing step to pull out all "bad" mixtures would help reduce the amount of data one machine has to process once it is merged, but defining bad is fuzzy. I think a reasonable minimum packing density and S value can be agreed upon, but it is a case where we have no idea what percentage of results will fall out. To keep the merged results manageable, we should probably be somewhat ruthless with the choice of minimum packing density, because we are looking for all of the peaks in the solution area; we don't much care for the middle ground. The unfortunate side effect of this is that we lose sensitivity data. This data exists in the resulting array of all mixture ratios for a given set of components, but would require some detailed, but straightforward, lookups for all mixtures within a set of ingredients. It may be useful to have a process which identifies the 5-10 "best" results from each batch of ingredients and looks up all mixtures within ~2% of them to look for minimum packing ratios. I would be willing to bet that most results will be closely related, but that would lend credence to a coarse 2-3% mesh. If only 10 results are reported along with sensitivity data of some form, we would have <100,000 data points, of which quite a few could be removed because they are merely the best of a bad option. This would leave enough results that a garden-variety spreadsheet can be used to view the results and do calculations such as cost, density, and such. While we would never be able to optimize for these parameters because much of the data is lost, I'm sure a reasonable minimum could be found in the data.

    Unfortunately this whole thought experiment leads us to a pile of unchangeable data; users can't simply tell the program what they have and get a result, which is the ultimate goal. With that said, at least a few modules of such a piece of code would be helpful. I think a sensitivity analysis of our current data would be in order just as a sanity check to make sure we didn't stumble into a more expensive and possibly more difficult to mix recipe for a negligible gain in density. If we had the full mixture results from even a few of the proposed ingredient lists that have been mentioned, we could see the interactions of cost and density, just to get a feel for how far from optimal we can move.

    Cameron, I would be interested to know the details of the process you use to break each nominal size aggregate down into more representative components. If you already stated this, I must not have fully appreciated its impact at the time.

    Please forgive my use of the word “we”, I do not consider myself a member of this excellent thread, but I have been reading since mid November when I stumbled across the Zone.

  9. #2789
    Join Date
    Apr 2004
    Posts
    41

    How about this

    How about this theory: the wetter the better? An epoxy-rich mixture eases mixing and also lets the air out better; then vibration compaction is where you get your packing density, and you just scoop off the excess epoxy from the top. Or you can soak up the excess with some dry mix.

    The old heavy particles settle to the bottom theory.

  10. #2790
    Join Date
    Jun 2005
    Posts
    334
    Let me preface by saying I have never used any heavy aggregates, that is, larger than 4 mm.
    I've never seen any segregation in epoxy, even when I tested a really wet mix.
    Why? I believe the viscosity and pot time are enough to minimize segregation.

    As I've noted before, the pro stuff is really wet. See the YouTube video above at about 1:21; on the upper left-hand side you'll see a bucket dumping the mix. For a few frames you see a very wet flow. It's not like anything I've poured recently.

  11. #2791
    Join Date
    Jan 2008
    Posts
    1

    You might want to check out the following German website: www.thomas-zietz.de. They are offering a mineral-epoxy based machine for hobby use, and it's good for machining steel. The construction and philosophy of the machine was discussed in minute detail on the German forum "Peters CNC-Ecke". As I recall, the epoxy content is less than 10%.

    I'm definitely sold on this method of machine construction, and I'll try it out, but first I gotta build a largish gantry type machine for wood processing.

  12. #2792
    Join Date
    Jun 2005
    Posts
    1436
    Welcome Krister.
    You'll find quite a few links to that site earlier in the thread.

    John
    It's like doing jigsaw puzzles in the dark.
    Enjoy today's problems, for tomorrow's may be worse.

  13. #2793
    Join Date
    Apr 2007
    Posts
    777
    KristerW,

    Welcome. Thomas Zietz posts here occasionally when we ask him questions.

    There are a lot of questions about what is absolutely necessary to make E/G work. I've taken the approach of designing the best possible E/G using engineering theory, so it may be a long time before the question of what is absolutely required gets answered.

    It appears most mixtures using a wide range of sizes will work acceptably. I, especially, and others are interested in understanding the entire solution space, not just getting a machine built. As a result, we delve into the academic research on the matter to see if we can prove the optimal solution. Once I'm convinced of the simulation results, I will be performing flexural tests using an Admet flexural tester.

    Due to fracture toughness considerations, a few papers I've seen referenced suggest that large aggregate makes mixtures weaker. A well-graded mixture with large aggregates is, however, easier to get to a high packing density.

    I'm going to leave it to the optimization guys in the near term to decide what to do with the equations. I'm tweaking my simulated annealing solver while realizing that the problem is complex. Aggregates can be broken down into subfractions by using the sieve analyses presented by the manufacturer (check the datasheets for quartz at www.agsco.com).
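    As an illustration of that subfraction idea, here is a minimal Octave sketch; the sieve sizes, top size, and percent-retained numbers below are invented placeholders, not AGSCO data:

    % Sketch: split one named aggregate into subfractions from its sieve analysis
    sieve_mm = [2.00 1.00 0.50 0.25 0.125];   % sieve openings, coarse to fine
    retained = [5 30 40 20 5];                % percent retained on each sieve (assumed)
    upper_mm = [4.00 sieve_mm(1:end-1)];      % assumed top size, then the next sieve up
    y = retained/sum(retained);               % volume fraction of each subcomponent
    d = sqrt(sieve_mm .* upper_mm);           % geometric mean diameter of each bin, mm
    [d; y]                                    % size and fraction of each subcomponent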

    Regards all,

    Cameron

  14. #2794
    Join Date
    May 2005
    Posts
    81
    What about using the "overall virtual packing density" for a mixture (top of page 2) as the starting point for a brute force search?






    Quote Originally Posted by sigma relief View Post
    I was doing some thinking last night as to how to implement such a brute force method and how long it would take to calculate. [...]

  15. #2795
    Join Date
    Apr 2007
    Posts
    777
    I've been running the sim from various starting points, but using the minimize-the-difference-between-the-K_i criterion, the sim keeps converging to a similar and reasonable set of numbers. Unfortunately, the sim I have now is working on the 27 subcomponents, not the 6 named components. I think I started it from the 1/27th-of-each-component starting point last time and let it run for a million iterations, which took under an hour.

    I still owe folks lots of data. I've attached the Octave program I've been using to get the optimum values of the named components from the distribution recommended by the CPM simulator. It has the aggregate subcomponents in use. The value b in the matrix is replaced with the subcomponent outputs of the CPM simulator, and it uses simplex to get optimal components. This doesn't seem to affect Phi much, but it does mess up the equal K_i's.
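    For readers without the attachment, the general idea can be sketched like this in Octave, using nonnegative least squares rather than Cameron's simplex setup, and with made-up numbers for the sieve matrix and target distribution:

    % Sketch: back out named-aggregate proportions from a target subcomponent distribution
    % A(i,j) = fraction of subcomponent i in one unit of named aggregate j (from sieve data)
    % b      = target subcomponent distribution recommended by the CPM simulator
    A = [0.70 0.10 0.00;
         0.30 0.60 0.10;
         0.00 0.30 0.90];
    b = [0.35; 0.35; 0.30];
    x = lsqnonneg(A, b);       % nonnegative mix of the named aggregates
    x = x/sum(x)               % renormalize to volume fractions summing to 1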

    Regards all,

    Cameron
    Attached Files

  16. #2796
    Join Date
    Jan 2008
    Posts
    48
    Quote Originally Posted by ckelloug View Post
    I did a graph of the rule of mixtures calculation in Post 1082 While I said strength in this post which was wrong, it is normally correct to apply the rule of mixtures for modulus as we are discussing.
    My nonlinear materials modeling has been primarily with laminar composites. The equations are the same for most of them (laminar composites, that is), but my hands-on experience has been Formula 1 grade carbon fiber layups. The composites equations all behave similarly in that the epoxy has a much lower modulus than the fiber, but you do not start seeing improvement until you are at least 50-60% fiber content (glass fiber is an exception to this). I was fortunate to work with George Staab (http://search.barnesandnoble.com/boo...50671248&itm=1) in modeling composite materials, but at the time I blew off all of the more isotropic short-fiber and particulate formulas as not applicable to my interests (they were outside of his research as well, but I will try to flip through his book and see what he said about them). I have no experience with the rule of mixtures you are referring to; it looks elegant, and from my experience with composites, possibly too elegant. The laminar composite formulas for overall modulus consisted of a 6 by 6 matrix of 36 nonlinear equations that had to be solved simultaneously for a result of a single-ply layup. I am not sure how the rule of mixtures is applied to formulations with more than 2 components, but there appear to be two options. One is that a component with a higher modulus will not be efficiently utilized by the matrix if its percentage is low, because there is not enough of it to constrain the mixture; the load paths are simply redistributed around it. The other option is that there is an averaging effect and the benefits, while not 1 to 1, will be seen in the final product.

    Even if the first option is true, the rule of mixtures demands a high aggregate density, and at the upper 80% range small density increases have a large effect on modulus. The fact that two very different mixtures previously proposed have similar packing densities indicates that we have a lot of flexibility to get reasonable densities. If a near-optimal packing density is a trivial solution for any reasonable choice of ingredients, then I feel the selection of those ingredients, aside from size, may become the next area we need to look at.

    My rather poor logic and layout for a brute force global solver shows my lack of programming and "specialty math" training (to think 3 years of collegiate differential equations gets me nowhere on such a solver is humbling). I have no doubt the ideal-case solver will work as advertised, but adding external constraints will take this out of the mathematical proof realm and back into engineering, where we need hard numbers to plug in. Cost will be a nice optimizing parameter, but it is really the stiffness that needs to be optimized, because it is the sole parameter that drives how much to buy for a given machine. Earlier derivations of beam deflection under load show how critical this is if you are building from specs backwards to a machine. Unfortunately, I don't think we can calculate stiffness beyond the rule of mixtures without some testing, but the question becomes how do we calculate what to test. The solver as I understand it will spit out ideal density and ideal density/cost recipes, but we will be back to "manual" calculations for any variations on these, which makes testing anything other than the ideal baseline just as difficult as it is now. Results from the original optimizer using 20 to 30 hand-chosen lists of ingredients with proper overall grading, but different intervals and ingredients, could tell us whether solutions within 1% or so of the "optimal" 88% (or whatever the final number is) are trivial. If this is the case, optimization aside from pure density may be easier to calculate with a bit of programming finesse standing in for advanced mathematical optimization.

    Now that I have talked myself through a great big logic circle that went nowhere, I will shut up and wait to see what magic the programmers in the crowd pull off.

    John K.

  17. #2797
    Join Date
    Jun 2005
    Posts
    334

    Metal powders on eBay

    Just got some Aluminum, Copper, and Brass powders from finepowder on eBay.
    This guy is first class all the way, so if you need some to make a Moglice substitute then look no further.
    He's got Molybdenum Powder as well, but ouch... market price on that is sky high again. Maybe he'd sell it in 8oz lots if asked. Glad I have my Z Moly powder.

    http://stores.ebay.com/FINE-POWDER-A...Q3amesstQQtZkm

    Wish he had Graphite and PTFE powder too.

    Jack

  18. #2798
    Join Date
    May 2005
    Posts
    2502
    John, the brute force method may not be such a bad idea.

    Look at it this way: if we want to mix n types of aggregate components and we want to investigate them to an accuracy of x, we can estimate the size of the brute force search from those two numbers.

    For example, if n = 6 components and x = 10% (i.e. we want to know to within +/- 10% how much of each component to add), we get:

    10^6 = 1 million combinations

    Let's say we want the hobbyist's special simple mix. It can use up to 6 components and we want accuracy to 5%. That would be:

    20^6 = 64 million combos

    I would think a simple BASIC program or the equivalent could grind through that pretty quickly.

    These are exponential relationships, so there are limits. But such programs are pretty darned straightforward.

    A high quality implementation could sample the area around a chosen formula to see how critical the exact mixture is. One could accept a slightly less strong mixture if it is more tolerant of mistakes.
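    That last idea could look something like this in Octave, with a placeholder objective standing in for the real packing model and an assumed +/-2% mixing error:

    % Sketch: probe how sensitive a chosen mix is to +/- errors in each component
    score = @(mix) -sum((mix - 1/6).^2);        % placeholder objective, not the CPM model
    base  = [0.20 0.20 0.15 0.15 0.15 0.15];    % a chosen 6-component recipe (example)
    delta = 0.02;                               % assume +/-2% error per component
    worst = score(base);
    for i = 1:numel(base)
      for s = [-delta delta]
        trial = base;  trial(i) = trial(i) + s;
        trial = trial/sum(trial);               % renormalize so fractions still sum to 1
        worst = min(worst, score(trial));
      end
    end
    printf("score: %.4f nominal, %.4f worst case\n", score(base), worst);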

    I'm also all for the "slightly wetter mixture" concept.

    Putting all of this together with readily available ingredients would really accelerate the progress here I think.

    Cheers,

    BW

  19. #2799
    Join Date
    Apr 2007
    Posts
    777
    sigma_relief,

    It sounds like you've actually had some formal training in this. I'm sure my materials prof thinks I was sleeping through his entire class. . . The rule of mixtures is a simple model that links percentage of reinforcement to modulus in isotropic composites. I seem to recall not paying attention at the time. . . (I've never done what you have with the laminates. The only thing I know about laminates is to go look in the book.) Rule of mixtures is not the be-all and end-all of models; it's approximate and states that the stiffness is likely to lie between the red and green lines on the graph in post 1082 (http://www.cnczone.com/forums/showpost.php?p=291506&postcount=1082) for a given composition.

    Rule of mixtures:
    E_c_max = E_m*V_m + E_p*V_p

    E_c_min = E_m*E_p/(V_m*E_p + V_p*E_m)
    where E_c is the modulus of the composite, E_m is the modulus of the matrix, E_p is the modulus of the aggregate, V_m is the volume fraction of the matrix, and V_p is the volume fraction of the aggregate.
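    Plugging rough numbers into the bounds above gives a feel for the spread; the epoxy and quartz moduli and the 85% fill in this Octave sketch are assumed illustrative values, not measurements:

    % Sketch: Voigt/Reuss bounds from the rule of mixtures above
    E_m = 3e9;        % Pa, epoxy modulus (rough typical value, assumed)
    E_p = 70e9;       % Pa, quartz aggregate modulus (rough typical value, assumed)
    V_p = 0.85;       % aggregate volume fraction, e.g. from the packing optimization
    V_m = 1 - V_p;    % epoxy fills the rest

    E_c_max = E_m*V_m + E_p*V_p;               % parallel (upper) bound
    E_c_min = E_m*E_p/(V_m*E_p + V_p*E_m);     % series (lower) bound
    printf("E_c between %.1f and %.1f GPa\n", E_c_min/1e9, E_c_max/1e9);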

    I'm going to run flexural modulus tests using my Admet test machine as time permits, now that Reichhold finally came through with epoxy samples yesterday.

    I was able to run a million iterations of the model with the simulated annealing solver in under an hour on my 2.4 GHz AMD Turion 64 mobile-based machine.

    I can sympathize with you, sigma_relief, on the differential equations thing. My first instinct was to whip out a Lagrange multiplier calculation, and then I'm like, hey wait a minute: do these derivatives exist? I think the answer is they don't, so I fled to numerical simulation without being too thorough; the thought of the partial derivatives on this gives me the willies.

    Don't worry about using the collective we here on the thread, I was once a lurker here too. But I then lost my lurker status and became an insane man on a mission and it changed my life. . .

    This whole thing left the mathematical proof realm as soon as I started feeding it data from actual aggregate. The reason that I'm using aggregates from agsco is that they are one of the only places in the U.S. with sieve analyses for their aggregates, and they'll also sell any size quantity to anybody. Under the hypothesis that optimizing and understanding the mixture matters, it seemed like a reasonable plan. Also, agsco can sieve aggregates to meet a distribution if we finally find something that works well enough that we want to duplicate it.

    sigma_relief, don't worry about the lack of specialty math training. I didn't really have any training in this kind of problem either, although working a couple of years as a software engineer for a mathematician helped. I just realized that optimization was what was needed and looked through Wikipedia examining optimization algorithms until I saw simulated annealing, which I had been interested in since high school, and realized it was one of the few I'd seen that could take such an ill-bounded problem. Guys like BobWarfield, ad_bfl, and lerman are the ones who actually know something about this optimization thing. This whole E/G exercise is a good way to keep engineering skills sharp.

    So far, it appears that driving the solution towards all the K_i being equal is working and getting very similar results regardless of the initial guess. There are a large number of solutions with similar real packing density, but I believe the solution that minimizes the K_i difference is unique, or at least far less degenerate than a generic solution for a given Phi value. It must be pointed out here that the item being modeled by the CPM is the real packing density, not the virtual packing density. This model has been calibrated by the French Laboratory for Roads and Bridges to correspond to within a few percent of the observed density of a really wide range of aggregate mixtures. This is why it is so complex and ill-behaved.
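    For anyone curious what a simulated annealing loop looks like in the abstract, here is a bare-bones Octave skeleton; the objective is a stand-in, not the CPM model, and this is not Cameron's solver:

    % Bare-bones simulated annealing over 6 volume fractions (placeholder objective)
    obj = @(y) -sum((y - 1/6).^2);              % stand-in for the CPM packing objective
    y = ones(1, 6)/6;  best = y;  T = 1.0;
    for it = 1:100000
      cand = y + 0.02*randn(1, 6);              % random tweak of the fractions
      cand = max(cand, 0);  cand = cand/sum(cand);
      dE = obj(cand) - obj(y);
      if dE > 0 || rand() < exp(dE/T)           % sometimes accept a worse mix, depending on T
        y = cand;
        if obj(y) > obj(best)
          best = y;
        end
      end
      T = 0.9999*T;                             % cool slowly
    end
    disp(best)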

    It should also be pointed out that once you have the optimal mixture, you can make as wet a mixture as you want with it. The difference between an optimized mixture and a non-optimized mixture is that the optimized mixture will not want to segregate when enough epoxy is added for it to become a suspension. Optimizing the aggregate packing density as we are doing just sets the minimum epoxy required to fill all the space between the particles. More will likely be needed in practice so that the particles are all completely surrounded by epoxy.

    Finally, a good solver will allow all sorts of tweaking, etc. I currently have a solver and model that I'm lucky works at all. I need to start over on parts of it, being a bit more careful and handling the linear algebra and materials models internally, so that I can adjust named aggregates instead of subcomponents and thus see the effect of changing the named aggregates whose subcomponent proportions we can't change.

    Ultimately speaking, I'd like to have a model that corresponds with all sorts of physical properties of the material including damping parameters and color but that will have to wait until after we understand the simple stuff and show that it is correlating with reality.

    Regards all,

    Cameron

  20. #2800
    Join Date
    Jun 2005
    Posts
    1436
    Meanwhile, back at the ranch.....
    Does anyone have knowledge, theoretical or practical, of the relationship between % filler and gel time?
    Obviously there are a large number of variables that affect the relationship, but I wondered if anyone knows of any rule of thumb/ generalizations that might help.
    Most people may end up with some variety of quartz as the main filler type, and a low viscosity resin, but that's probably about it in terms of common denominators.

    With large % filler present, the exotherm is going to be low, so heating the whole thing up may be a good way to go, as has been suggested already.
    However, I should like to gather what info there is out there before I start thinking about heater chambers for my spinning molds.
    John
    It's like doing jigsaw puzzles in the dark.
    Enjoy today's problems, for tomorrow's may be worse.
