Excellent slp_prlzys
Especially like the Circular Interpolation bit, just what I've been looking for.
Many thanks
Regards
Sean.
Originally Posted by slp_prlzys
Thanks for the input, Plexer.
Originally Posted by plexer
This is exactly my point!
Either the GCode to Step calculations are done on the PC or they are done on the breakout board.
If they are done on the breakout board, that board will need to have a fair amount of muscle ( = expense)
If they are done on the (M$) PC and sent to the breakout board, which then simply takes the individual step instructions and steps the motors in real time (real time as seen from the breakout board's perspective, based on when the instructions arrive from the PC), then we are at the mercy of Mr Gates, and real "real time" can only be dreamed of: if the M$ OS decides to do something else, the output to the port is delayed and queued, and when the OS is finished, the complete data queue is dumped to the port as quickly as the port can handle it. Result = step timing screwed!
On the other hand, if the (M$) PC creates the individual step instructions and sends them to the breakout board (as fast as the PC, port baud rate etc. will allow), where they are queued/cached, and the breakout board releases the instructions to the motors at the correct intervals, then the timing problem is solved.
QED
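To make the argument concrete, here is a toy simulation (Python, all names made up, not any real firmware): the PC delivers step words in irregular bursts, but the board drains its queue one word per fixed tick, so PC-side jitter never reaches the motors.

```python
from collections import deque

def simulate(bursts, buffer_size=64):
    """Toy model of the queue-on-the-board idea.  Each entry in 'bursts'
    is a list of step words arriving from the PC 'at once'; the board
    releases exactly one queued word per tick.  Returns the list of
    (tick, word) pairs actually issued to the motors."""
    queue = deque()
    issued = []
    tick = 0
    for burst in bursts:
        for word in burst:
            if len(queue) < buffer_size:   # board caches the instructions
                queue.append(word)
        if queue:                          # one word per tick, regardless
            issued.append((tick, queue.popleft()))
        tick += 1
    while queue:                           # drain leftovers, still one/tick
        issued.append((tick, queue.popleft()))
        tick += 1
    return issued
```

Even though three words arrive in one burst and none in the next, the motors see one word per tick, which is the whole point of buffering on the board.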
Sorry Guys - Duplicated Submission
Hey Andy
Originally Posted by andy55
There are many commercially available systems out there which are pretty decent (and a number which are not). I think the idea should be to make a "FREE" alternative that will solve the problems.
Does anyone know what language (perl or whatever) the EMC app is written in?
To solve the timing problem, even EMC has to run on a special tweaked version of Linux.
Can anyone imagine Bill Gates giving us access to the M$ source so that we can tweak it? Not in my lifetime!
Hey Sean,
Originally Posted by CLaNZeR
In the "Process GCode on the breakout board" option, it is the G02 and G03 commands that have me scared. From my limited knowledge of PIC programming, when you start using floating point arithmetic, the poor PIC really has to jump through hoops, and the EFFECTIVE speed of the processor is greatly reduced because it now has to run through a number of algorithms (the floating point subroutines) to do the math.
Scary Stuff!!!
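For what it's worth, one classic way to dodge floating point entirely is an integer-only arc stepper. Here is a rough sketch (Python for readability; a real PIC version would replace the squarings with incremental add/subtract updates) of walking a quarter circle using nothing but integer arithmetic and comparisons:

```python
def arc_steps(radius):
    """Integer-only quarter circle from (r, 0) to (0, r): at each step,
    pick whichever unit move -- left (x-1, y) or up (x, y+1) -- keeps
    x*x + y*y closest to r*r.  No floating point anywhere."""
    x, y = radius, 0
    points = [(x, y)]
    r2 = radius * radius
    while x > 0:
        err_left = abs((x - 1) ** 2 + y ** 2 - r2)
        err_up = abs(x ** 2 + (y + 1) ** 2 - r2)
        if err_left <= err_up:
            x -= 1        # stepping left stays closer to the true circle
        else:
            y += 1        # stepping up stays closer to the true circle
        points.append((x, y))
    return points
```

Each output point is one motor step on one axis, which is exactly the granularity the breakout board would need.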
Aubrey
Aubrey,
Did you look at the development boards or the control boards? They are module based for fast prototyping. They have memory and a real-time clock onboard. Could the onboard clock be used to overcome the Windows delay problem?
mike,
when you do things right,
people won't be sure you've done anything at all.
Hi Gerry,
Originally Posted by ger21
Begs the question "Why hasn't someone done it yet?"
Also, still relies on the timing being triggered by the M$ PC from what I can see.
Best
Aubrey
Time to lay it out so that everyone can see what I am getting at.
Givens:
1) Both Micro$oft and, to a lesser extent, Linux operating systems do not always set the port pins at the exact moment the application wants them set.
2) EMC has to be loaded on an optimised OS installation; with Micro$oft there is no such option.
3) Erratic timing can cause skipped steps, tool gouging and a host of other problems, which can mean wasted money when you count our time, effort, doctor's bills for high blood pressure and the material cost of the workpiece.
It is also accepted that:
1) There are a number of solutions out there that overcome (to a greater or lesser degree) the problem. Unfortunately, most of them cost money.
2) There are a number of ways to overcome this problem, but as a hobbyist, I would like to be able to say that I was a member of the GROUP that came up with this particular solution AND THE DARN THING WORKS WELL!
For the purpose of getting clarity on where I am coming from and what I think our goal should be, the attached document should help.
The thinking is as follows:
1) As everyone has their own preference in CAD packages, this wheel does not have to be re-invented. We will use the GCode output from the CAD package as the source.
2) The GCode generated by the CAD package will be retrieved by our StepMaker software and computed into a string of 2-byte words, which will be stored in a pre-processed file on the disk drive. This step is necessary especially for those who have slower computers in their workshop. Some of the G02 and G03 commands may take the PC longer to step-calculate, which could mean that the mill has finished the previous steps before the PC has calculated where the next step(s) are to take it. This "buffer underrun" scenario could also be the reason for erratic or problematic behaviour on some mills.
As the pre-processed file contains no iterations or loops, it may be massive.
Imagine if you were to store each step command that is sent to your mill when you run your favorite GCode file. This is EXACTLY what the pre-processed file contains!
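As an illustration of point 2, here is a sketch of flattening one straight move into explicit unit steps (Python, hypothetical function name; this is standard integer-only Bresenham arithmetic, one output entry per machine step, which is exactly why the pre-processed file gets massive):

```python
def flatten_linear_move(x0, y0, x1, y1):
    """Expand one straight move into a list of single-step commands.
    Each (cmd_x, cmd_y) entry is one step word: -1, 0 or +1 per axis."""
    steps = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy                      # Bresenham error term: integers only
    x, y = x0, y0
    while (x, y) != (x1, y1):
        e2 = 2 * err
        cmd_x = cmd_y = 0
        if e2 > -dy:
            err -= dy
            x += sx
            cmd_x = sx                 # step the X axis this tick
        if e2 < dx:
            err += dx
            y += sy
            cmd_y = sy                 # step the Y axis this tick
        steps.append((cmd_x, cmd_y))
    return steps
```

A StepMaker along these lines would then pack each entry into the 2-byte word format and write it to disk.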
3) Now we take the pre-processed step instruction file and run it through the breakout board feeder application. The Feeder App sequentially reads the file from start to finish and sends it to the breakout board. There will of course have to be protocols set up so that the breakout board can pause the stream so that an overrun situation is prevented.
Now, what happens in the breakout board?
First let's look at our 2-byte "word".
If we put the two bytes one behind the other, we effectively have a bit string 16 bits long.
Using the first 2 bits, we have 4 options available, i.e. 00, 01, 10, 11.
Assume that process branching happens as follows:
00 = output the last 10 bits to the motor driver pins (assuming 5 motors with step and direction)
01 = output the last 4 bits to the auxiliary relay pins (assuming 4 relays with on/off capabilities)
10 = "Canned Procedures". Here we can zero any axis, change timing rate for steps to be sent to the motors or anything else that we may decide to incorporate into the software design.
11 = "Send to LCD Display". e.g. "1100000000000000" should clear the LCD display, and "1100000001000001" (the last 8 bits being ASCII 65) should put the character "A" on the LCD.
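The packing and unpacking of that 2-bit prefix scheme could look something like this (a Python sketch; the constant names and the "LCD payload is an ASCII code" detail are my assumptions, not a fixed design):

```python
# Opcode prefixes from the proposal (first 2 bits of the 16-bit word).
OP_MOTOR = 0b00   # last 10 bits -> 5 motors x (step, direction)
OP_RELAY = 0b01   # last 4 bits  -> 4 auxiliary relays
OP_CANNED = 0b10  # canned procedures (zero an axis, change step rate, ...)
OP_LCD = 0b11     # last 8 bits assumed to hold an ASCII character

def encode(op, payload):
    """Pack a 2-bit opcode and a 14-bit payload into one 16-bit word."""
    assert 0 <= op <= 3 and 0 <= payload < (1 << 14)
    return (op << 14) | payload

def decode(word):
    """Split a 16-bit word back into (opcode, payload)."""
    return word >> 14, word & 0x3FFF
```

On the board side, PIC2 would branch on the opcode from decode() and route the payload to the motor pins, relay pins, canned-procedure handler or LCD.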
Now, what does the board do?
PIC1 is busy listening for traffic on the connection to the PC.
The first thing it gets is a command which it interprets as "Is it OK to Start?"
PIC1 resets the board and starts receiving the data and storing it to the memory chips (4 in my drawing)
When the last memory chip is full (or the end-of-transmission flag is received from the PC), PIC2 is told to start.
PIC2 queries PIC3 for the address of the next (in this case, first) command.
When PIC3 receives a memory location request from PIC2, the location is returned to PIC2 and the next memory location pointer is incremented in PIC3.
If the next memory location is on the next memory chip, PIC3 signals PIC1 to re-fill the memory chip that has just been finished.
In the meantime, PIC2 has retrieved the word from the memory location and processes it.
PIC2 then waits for the timing pulse and executes the command.
When the command has been executed, PIC2 queries PIC3 for the location of the next command and the whole process starts again.
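The PIC3 bookkeeping described above could be modelled like this (a toy Python model, not PIC code; the chip size and count are made-up parameters):

```python
class AddressManager:
    """Toy model of PIC3: hands out sequential memory addresses to PIC2
    and flags a refill request (for PIC1) whenever the pointer crosses
    from one memory chip into the next."""

    def __init__(self, chip_size, num_chips):
        self.chip_size = chip_size
        self.num_chips = num_chips
        self.pointer = 0

    def next_address(self):
        """Return (address, refill_chip): refill_chip is the index of a
        just-finished chip that PIC1 should now re-fill, or None."""
        addr = self.pointer
        self.pointer = (self.pointer + 1) % (self.chip_size * self.num_chips)
        refill_chip = None
        if self.pointer % self.chip_size == 0:
            refill_chip = addr // self.chip_size   # chip just emptied
        return addr, refill_chip
```

PIC2's loop would then be: get an address, fetch the word, wait for the timing pulse, execute, repeat, while PIC1 re-fills finished chips in the background.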
You will notice a number of PICs in the drawing apart from PIC1, 2 and 3. My thought is that a small (<$2.00) PIC is an easier way to implement PWM and all the other good stuff than complicated boards. Want to change the way it reacts? Change the program - infinite possibilities!
Comments Please
Best
Aubrey
Let's try the attachment again - Sorry ;>{
The G100 won't be available for another week or two, although you can purchase it in 2-piece form right now (see the website).
Originally Posted by aubrey
There are alpha versions of Mach4 available that can run it right now. Not sure about the timing, but I think it's controlled by the G100.
Gerry
UCCNC 2017 Screenset
http://www.thecncwoodworker.com/2017.html
Mach3 2010 Screenset
http://www.thecncwoodworker.com/2010.html
JointCAM - CNC Dovetails & Box Joints
http://www.g-forcecnc.com/jointcam.html
(Note: The opinions expressed in this post are my own and are not necessarily those of CNCzone and its management)
Aubrey,
A lot of what you are talking about is over my head.
I do wonder if you are over-simplifying the requirements of a real machine in operation. Acceleration and deceleration are not performed in gcode. Feedrate override is going to wreak havoc with what you have already fed into the buffer. Feedhold, Estop, same problem. You will want to be able to pause the program at any instant, then recover and continue. Because these events all happen on the fly, they cannot be anticipated within a fully processed program path.
How about execution of one line of gcode per input? This involves Windows quite a bit.
Just add these to your list of things to do. But, carry on
First you get good, then you get fast. Then grouchiness sets in.
I saw this little guy a while back: the uM-FPU Floating Point Coprocessor. You can find it at the Parallax site. It uses SPI to communicate with your processor. It does the floating point stuff, plus you can program macros (formulas) into it. If you go to the Parallax site they have documentation on interfacing to their products and some programming documentation too. Hope this helps...
Originally Posted by aubrey
Evodyne
Point taken - how are they handled currently?
Originally Posted by HuFlungDung
I assume that they are handled within a specific GCode Command.
Also, does the Acceleration/Deceleration apply to ALL GCodes or only to the rapid movement commands?
If you look at the document, you'll probably notice an alarming absence of a keypad! One will be needed!
Originally Posted by HuFlungDung
If a wait for user intervention is required, all we need is a "Canned Command" that will pause PIC2 until the operator has changed the tool or done whatever is needed.
If the PIC has to do all the calculations, in my opinion the situation will arise that the next step needs to be sent to the driver boards but the PIC has not yet worked out what the next step is. This will probably be just as bad as missing a step.
Originally Posted by HuFlungDung
Originally Posted by HuFlungDung
Thanks (I think!) Please remember that this is not "my" project. If it were, it would be a non-starter, because I definitely do not have anywhere near the expertise in all the required areas to pull it off. It will have to be a group effort if it is to have any chance of seeing the light of day.
But if it does....... It will be a great little achievement.
Hi Evodyne,
Originally Posted by Evodyne
As far as I know, the Parallax product (Basic Stamp) operates in "interpreter" mode, which already slows it down.
Maybe I am wrong - does anyone know for sure?
This is NOT to say that it is a bad product at all ! !
Simply put, a standard algorithm will run faster as compiled code than under an interpreter, all other things being equal.
And to make things more interesting: I believe (not confirmed) that the Parallax Basic Stamp uses some PIC or other as its CPU.
I have also heard (once again - NOT CONFIRMED) that this may have changed.
Please guys - DO NOT QUOTE ME ON THIS! It is what I have heard and is definitely subject to confirmation. I don't want to step on anyone's toes.
Aubrey
Aubrey wrote: We are dealing with the Micro$oft Window$ operating system.
Then 2 sentences later:
2. Come up with a solution to ensure that we, the humans, are not to be screwed around by the shortcomings of the operating system
I'm sorry to chant "Linux", but really cannot see any other choice without shelling out lots of money for a realtime system with expensive compilers and tools.
Some of you want to be part of a group with a better solution. There is such a group, you just have to join it: The Enhanced Machine Controller, or EMC.
The low-level parts of EMC are written in C. That is: hardware I/O, stepping/servo loop, trajectory planning, kinematics, interpretation of G-code, low-level execution of operator I/O, and lots more. The user interface is written in other languages as far as I know.
It relies on RTAI for realtime service. RTAI is extensively used in commercial controllers and PLCs as well, is well debugged, and of course its source is available. Do not underestimate the complexity of writing a realtime kernel or scheduler from scratch. One will be needed for a microcontroller too, or things will get even more complicated. Realtime means actions will be executed within well-defined time limits; it does not necessarily mean these time limits will be very small. So there is a limit to how many steps/sec a given piece of hardware can deal with, regardless of software design.
If you need more speed than the PC hardware can provide, EMC has solutions for this: use a hardware step generator made from a programmable gate array; EMC has open and well-defined software interfaces to make this simple. (I went with this solution to gain the speed I wanted.) Or you can choose hardware that controls servo motors equally easily. The drivers for these have already been written, so you just pick one that suits you. Should your requirements not be fulfilled by an existing hardware/driver combination, you can write your own. It will be a fraction of the work of writing everything yourself, and you will benefit all along from the work other people do at other levels of the software.
I think you should only reinvent the wheel if there are only square or too expensive wheels existing. So I suggest you look closely at EMC to see if it will not be quite close to what you need.
OK, so I'm biased since I run EMC and have a good performer for a really low price. The PC running it would otherwise have been in the dumpster long ago, and the mentioned hardware cost me just over $100, and can do 300 000 steps/sec if I ever find drives that can swallow them that fast.
Einar
CLaNZeR:
You might also be interested in this too.
http://www.khwarzimic.org/takveen/helix.pdf
I was going to post the pdf itself but better from
the source.
mhel
"This is intentionally left blank."
The controller has to send steps at the proper rate for accel and decel.
Point taken - how are they handled currently?
I assume that they are handled within a specific GCode Command.
Also, does the Acceleration/Deceleration apply to ALL GCodes or only to the rapid movement commands?
Accel and Decel are handled by the controller, and are independent (sort of) of the G code.
Any time the machine increases speed, or decreases speed, it needs to accelerate and decelerate. Motors can't go from 0 rpm to any given rpm instantly; they must accelerate.
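To illustrate the point, here is a crude sketch (Python; the numbers, the linear ramp, and the function name are arbitrary illustrations, and it assumes total_steps >= 2 * accel_steps) of a trapezoidal profile expressed as a per-step delay table - the controller shortens the delay between steps while accelerating, holds it at cruise, then mirrors the ramp to decelerate:

```python
def step_delays(total_steps, accel_steps, min_delay, max_delay):
    """Per-step delay table for a trapezoidal velocity profile.
    Longer delay = slower motor; the ramp interpolates linearly from
    max_delay (standstill) down to min_delay (full speed)."""
    delays = []
    for n in range(total_steps):
        if n < accel_steps:                       # accelerating
            frac = n / accel_steps
        elif n >= total_steps - accel_steps:      # decelerating
            frac = (total_steps - 1 - n) / accel_steps
        else:                                     # cruising at full speed
            frac = 1.0
        delays.append(max_delay - (max_delay - min_delay) * frac)
    return delays
```

Because the table is just data, it could be computed on the PC and fed to the board along with the steps, or generated on the board by a "canned procedure" that changes the step timing rate.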
Last time I mention this, but the Gecko G100 will handle all the accel and decel. You feed it coordinates and feedrates, and I believe it will do the rest, in hardware. I think that's the easiest way to go. It's not that expensive, either, imo.
Gerry
The Gecko might do this, but it does not know what the other axes are doing.
Originally Posted by ger21
So the path will not be correct unless moving only in a direction parallel to one of the axes. A move in one axis must be related to the others. Unless of course we talk about a machine like a coordinate drilling machine where only the endpoints of every move matters and not the path it takes between them.
So all G1 moves must be coordinated. For a G0 it may not matter (assuming no clamps or other obstructions). Some machines may even choose the fastest route on each axis for G0, making the move a dogleg path.
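The "coordinated" view can be put in a few lines (a Python sketch of the general idea, not how any particular controller implements it): the commanded feedrate along the path is split into per-axis components so every axis arrives at the endpoint at the same time.

```python
import math

def axis_feedrates(deltas, path_feedrate):
    """Split a commanded path feedrate into per-axis rates.  'deltas'
    is the displacement per axis; each axis runs at a speed proportional
    to its share of the move, so all axes finish simultaneously."""
    length = math.sqrt(sum(d * d for d in deltas))
    if length == 0:
        return [0.0 for _ in deltas]
    return [path_feedrate * d / length for d in deltas]
```

For a 3-4-5 move at feedrate 10, the axes run at 6 and 8; the vector sum of the axis rates equals the path feedrate, which is what keeps the tool on the programmed line.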
Yes, it does know what the other axes are doing. It treats each move as a vector made up of each axis component. I don't know how to explain how it works, but I'm pretty sure it does. Like I said before, there is plenty of info on this on the Geckodrive Yahoo group.
Originally Posted by ESjaavik
Gerry
@Ger: Sorry, I confused the G100 with the "old" Geckos. I just went over and browsed through the manual for it, and from that it looks like the G100 is hardware that can be used to implement a multi-axis controller. But if it comes with the capabilities you mention, the manual does not say so. It refers to what it could do [given the right program], not what it will actually do as delivered. In fact it is built up very much like my system, except mine uses the PC CPU instead of the Rabbit microcontroller, and a separate board (from Pico-Systems) with the FPGA clocking out the steps. And the software is already done (EMC). So it looks like the G100 can do it if the software is finished. I don't know if it is, so I can't comment on its actual capabilities. Maybe there is a soft[/firm]ware manual somewhere?
It looks like the G100 has what it takes to make a non-continuous move A->B without a complicated program. (It does the steps using the PGA hardware.) It gets a lot tougher if you do not want it to stop at B, but smoothly continue in another direction to C, then to D, and so on. Near each corner, it must be continuously reprogrammed to the new direction, taking into account how far it may be allowed to deviate from the correct path. Like you correctly wrote: it cannot instantly go from one speed to another.