I have been reading, reading and reading about alternate methods of getting step pulses to my G540 (methods other than a parallel printer port). I FULLY understand the problems of USB/Ethernet latency/timing. I also (sadly) understand the same problems exist with the parallel port with modern Windows OSs.
My small A4 router sits in a big box in my "game room" with a desktop sitting right next to it, so why am I worried about a USB/Ethernet-based controller for it? Several reasons that have NOTHING to do with my A4 :-) We have a lot of company visit us, and when people see a PC they want to check their e-mail, play a YouTube video, or check their Facebook. That is just life in the 21st century. Currently I have two hard drives installed in the PC, one with Win7U for "guests", the other with WinXP for running my A4. But Win7U and WinXP don't "play nice" even on separate drives: WinXP views Win7U's access to its drive as an invasion, and on boot-up XP wants to scan the entire drive. This is an irritation I can resolve, but it is an irritation nonetheless. (The main problem is that I have almost everything disabled in XP, Ethernet/USB included, so I have to transfer files to the XP disk from Win7, then reboot into XP...)
More importantly, I do all of my CAD/CAM work on my Laptop, and I would really like an "easy" way to migrate that work to the A4 or A4 PC. Ethernet would be great for me, but anything has got to be better than my current situation. (I know, just build another PC exclusively for the A4, but I don't like that answer for aesthetic reasons.)
Finally, with the G540 and 12TPI acme screws, about the best I can get my current machine to do without missed steps is 30ipm. It should do twice that, but it simply doesn't. It will run @ 60ipm for a good while, then comes that horrible noise and a ruined workpiece, but this thread is not about tweaking Mach3.
The problem seems to be addressed in three distinctly different approaches:
1) Attempt to work around the problems on the PC side with drivers/DLLs etc. for a specific target controller.
2) Move the gcode resolution to a uController/FPGA removing the PC from the timing loop.
3) Develop a PC front end that resolves the g-code into higher level commands designed specifically for the target controller.
Each approach has strengths and weaknesses. #1 has the appeal of "universal acceptance" and backward compatibility with existing software/hardware. #2 & #3 are the most logical and straightforward, but are also the most likely to leave people "stranded" if the support goes south.
At the end of the day, uControllers/FPGAs are the MOST NATURAL choices for precision timing. Even modest uControllers are very comfortable in the nanosecond time domain. The problem is that uControllers/FPGAs require complicated firmware, specially designed PCBs, and typically proprietary PC applications to control them. This makes them expensive up front, and puts their continued support in danger as the product matures.
So, who has a USB/Ethernet controller and can report on the pros/cons of their experience? I like what Kroko has done with a #3 solution, and am considering it. I also like the KFlop solution and the idea of using a Mach3 plugin, but I worry that my parallel port problems would continue. Warp9 and the SmoothStepper seem to have a fairly large following, but appear to have some ongoing issues as well.
The first part of this post ends here with this request: owners/makers of the various controllers, please share your experiences/thoughts about your hardware/firmware/software. I am going to buy something soon and would really like some input from people using the various products.
********************************************************
Part 2:
The goal: an open source discussion/development of a controller interface that is not timing-dependent on the host PC. That is, step data can be sent to the controller all at once (buffered) or in large "chunks" via any available means (LPT, USB, Ethernet, flash drive, etc.). The purpose is to find a way to bridge the PC to the drivers that is platform independent.
I have several ongoing projects, so I am NOT planning on undertaking this any time soon, but I am hoping that opening a dialogue might stir some interest in an open source approach to this ever-escalating problem (newer OSs not playing well with LPT ports, and the eventual demise of the LPT port in future MOBOs).
The three approaches outlined above are the most obvious ways to remove the LPT port from CNC. If there are other approaches that deserve consideration, please share your thoughts. Of the three, the first is a non-starter because it does not address the demise of the LPT port, so that leaves us with PC based gcode "compilers" and external gcode interpreters (ie, the uController/FPGA resolves the gcode).
There is an open-source gcode interpreter project for the Arduino. I think the AVR family of uControllers is fabulous, but under-powered for such an undertaking. I also think that a firmware gcode interpreter could severely limit compatibility and would lend itself to highly proprietary purposes. It is certainly possible to overcome these limitations, but this approach does not lend itself to platform independence; it simply uses uController power to replace the PC.
I think Kroto's approach has the most merit for open source. What is needed is a defined communication protocol between a PC and a controller that is independent of the controller. To envision how this might work we only need look at the origins of gcode. The original idea of gcode was to create a series of commands that a controller could resolve into pulse streams, but since gcode's inception PCs and electronics have made huge strides, and we now want features like acceleration, forward-looking algorithms, etc., that simply aren't practical in firmware (not for general purposes). Abandoning gcode altogether would be silly; there is far too much momentum in existing software.
To bridge the gap between gcode and controllers, I am suggesting a standard protocol that includes timing along with step and direction information, plus feedback to the host PC about the actual position in the execution. Take a standard 4-axis driver like the G540. For this driver we need up to 4 bits each for step and direction, or one byte of data for any given step. Obviously we would also need timing information for each step. At the end of the day, we need to ensure that the data throughput does not exceed normal communication protocols, and that a reasonably priced uController/FPGA could handle it. The process might go something like this:
Obviously there need to be "modes" of operation that cover moving 1 to Max axes within a given "setup instruction" data group. For instance, if A & Z are to remain "fixed" for the duration of the instruction, then only two axes of pulse streams are required, and a mode could be selected where 4 pulse strings per byte of data could be generated. Obviously curves require more data. There would also need to be support for more than 4 axes, but this is fairly trivial; it just increases the data throughput.

Code:
Set step timing to 10uS (one byte instruction)
Set Dir to + on Y axis, + on X, - on Z, + on A (one byte instruction)
Set Step Counter to 100 (one byte instruction)
Set Pulse Train Counter to 4 (one byte instruction)
Step Pulse train:
0001 0001 ; moves A two steps of 100 pulses (one byte pulse)
1001 0101 ; moves Y & A 100 steps then moves X & A 100 steps (one byte pulse)
1010 0010 ; moves Y & Z 100 steps then moves Z 100 steps (one byte pulse)
0001 0001 ; moves A two steps of 100 pulses (one byte pulse)

Next would come a "setup instruction" that would change parameters as required.
These are just examples of a possible protocol for producing a relatively compact, non-time-dependent data stream to a controller. From the controller's point of view it doesn't matter if it is getting the data via LPT, USB, Ethernet, flash drive, etc. Defining the protocol is the important part. This would allow developers of applications like Mach3 to offer a "compiler" (or at least an output format that could be compiled using an open source compiler) with their existing software, and hardware/firmware developers to offer their products to the community without the community having to worry about compatibility issues.
I see no reason a compiled "protocol" would need to be more than 10x larger than its corresponding gcode file; it could actually be very close in size, and perhaps even smaller in some cases. "Tricks" to avoid the overhead involved with acceleration might include an auto-incrementing pulse train mode, so that the time between pulses increases/decreases every data byte until some maximum/minimum is reached. Features such as "supported modes" can be hardware dependent and software selectable, leaving room for more advanced features and firmware designs.
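As a sketch of the auto-incrementing idea: instead of sending a timing byte per step, the host sends three parameters (start period, decrement, floor) and the controller expands them into an acceleration ramp itself. Names and units here are hypothetical:

```python
def ramp_periods(start_us, delta_us, floor_us, count):
    """Hypothetical auto-increment mode: the controller shortens the
    step period by delta_us after every pulse until it hits floor_us,
    so an acceleration ramp costs three parameter bytes, not one timing
    byte per step."""
    period = start_us
    out = []
    for _ in range(count):
        out.append(period)
        period = max(floor_us, period - delta_us)
    return out

# Ramp from 100us/step down to a 50us/step cruise over 8 steps.
ramp = ramp_periods(100, 10, 50, 8)
```

Decelerating at the end of a move would just be the same mode run in the other direction (period incrementing up to a ceiling).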
Feedback to the host PC is not particularly time sensitive for graphic interfaces, and could be as simple as an absolute pulse count from "zero" on all axes, along with switch/relay/etc. data, every 100ms or so (a fairly small amount of data).
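A status packet like that could be as small as four signed step counters plus one I/O byte. A sketch of one possible layout (the field order and names are my assumptions, not a defined standard):

```python
import struct

def status_packet(pos, io_bits):
    """Pack absolute step counts for X/Y/Z/A (signed 32-bit each,
    little-endian) plus one byte of switch/relay state: 17 bytes
    every 100ms is only ~170 bytes/s of feedback traffic."""
    return struct.pack('<4iB', *pos, io_bits)

def parse_status(pkt):
    """Host-side decode of the same 17-byte packet."""
    *pos, io_bits = struct.unpack('<4iB', pkt)
    return list(pos), io_bits

pkt = status_packet([1000, -250, 0, 42], 0b101)
```

At that rate even a slow serial link could carry the feedback with room to spare.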
I have droned on and on, apologies, but I think as a community we need to be working on this: defining a protocol/language, whatever we want to call it, that will free us from the vagaries of LPT timing issues. If this protocol is defined in an open source/open forum, then software developers and hardware/firmware developers can work things out on either end, because we, the consumers, will insist on it. The way things stand right now, a new solution HAS to either comply with existing hardware/software OR develop both simultaneously and then convince us that it will be supported in the future. If WE define the protocol, then hardware folks can focus on what they do best, software developers can focus on what they do best, and if someone on either side goes away, we aren't left holding the bag.
I guess I really should have made this a separate post ;-)
Fish