Good point @fablab_elpaso. Unfortunately it’s easy to toast your Glowforge when modifying firmware, so we can’t replace parts if that happens. (We toast parts all the time when we’re developing!)
Now why can’t you post those videos? I bet they are full of excitement.
Ha! OK, duly noted. If I catch a video of something going terribly awry I will share.
It’s usually at 2am when the cameras aren’t running, though.
@dan Is this something that would be hosted on GitHub or the like? Is the current firmware forked from something else, e.g. GRBL, TinyG, or Smoothie? Or completely homegrown? Lastly, you mention it’s user-flashable: would that be via a special app, or through something common like the Arduino IDE? And I assume it can be done via the USB port, not an ISP?
Sorry for all the technical questions. Kind of nerdy that way.
Not at all. We haven’t decided on hosting or firmware flashing procedure yet. The device doesn’t handle G-code, that’s done in the cloud, so there’s no equivalent of TinyG. Our priority right now is getting the experience great so we won’t have firmware release details for a while.
Okay- That’s the statement that fascinates me the most. Typically CAM generates the G-code, then the motion controller handles the coordinates and does the low-level stuff by instructing the steppers via step/dir.
I can understand that the CAM part is done in the cloud- but what instruction set is the onboard motion controller using? The follow-up question is: what limitations were you hitting that made you guys ditch standard G-code?
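For anyone following along, here’s a toy sketch of the “low-level stuff” a traditional motion controller does with a G-code move: turn a commanded distance into DIR pin level plus a count of STEP pulses. The drivetrain numbers are assumptions I made up for illustration, not anybody’s real config.

```python
# Hypothetical sketch of traditional step/dir generation from one G-code move.
STEPS_PER_MM = 80  # assumed: 1.8-deg motor, 16x microstepping, 20-tooth GT2 pulley

def gcode_move_to_steps(x_mm: float) -> tuple[int, int]:
    """Convert a relative X move into (DIR pin level, STEP pulse count)."""
    direction = 1 if x_mm >= 0 else 0          # DIR pin: 1 = +X, 0 = -X
    steps = round(abs(x_mm) * STEPS_PER_MM)    # STEP pulses to emit
    return direction, steps

# "G1 X12.5" taken as a relative move of 12.5 mm in +X:
print(gcode_move_to_steps(12.5))  # (1, 1000)
```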
Bingo. We do both CAM and motion planning in the cloud. We just send the machine the actual waveforms to be fed to the motors (X, Y, each fan, focus, etc), laser, chiller, etc. There’s a bit of decompression and then the uP sprays the results at the right I/O lines.
Not being constrained by g-code opens some pretty cool doors in terms of what’s possible… much of that will be behind the scenes, though.
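To make the “decompression, then spray at the I/O lines” idea concrete, here’s a minimal sketch. The run-length-encoded format is my own guess purely for illustration, not Glowforge’s actual protocol: the cloud sends (repeat count, pin-state byte) pairs, and the microcontroller just expands them and writes one byte to the port per timer tick.

```python
# Guessed-at illustration (NOT the real Glowforge format): replaying a
# run-length-encoded stream of pre-planned pin states onto the I/O lines.

def decompress(stream):
    """Expand (repeat_count, pin_state_byte) pairs into per-tick states."""
    for count, state in stream:
        for _ in range(count):
            yield state  # one byte per timer tick: step/dir/laser/fan bits

# Each tick, the uP would write the byte straight to the output port.
compressed = [(3, 0b00000101), (2, 0b00000100)]
print(list(decompress(compressed)))  # [5, 5, 5, 4, 4]
```

The point being: all the planning intelligence lives server-side, and the onboard job reduces to decode-and-replay.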
Interesting. So CAM & motion planning are done in the cloud, and the GF controller receives a multi-channel waveform to control all the things? So- there is nothing onboard that does the traditional step/dir generation (i.e. a Mesa card or PRU)… just feeding in the actual pulses that have been post-processed remotely?
That basically means you must have something as epic as a Mesa (or better) in the ‘cloud’ to drive all your machines.
Please tell me at some point you guys are going to talk more about that in detail. Because that’s pretty eff’ing awesome. Seriously. I’d have to say that’s the most understated feature of the GF (from an engineering aspect)
Funny Glowforge genesis story: I was going back and forth with Mark, our CTO, between closed-loop (servos, which I was advocating) and open-loop (steppers). He made a better case for steppers. Finally I said, “Well hell, if you want to go open loop, you don’t even need local control at all. Just put the damn motion controller in the server.” I didn’t actually realize what I’d suggested until Mark stared at me like I was from Mars for a while and it dawned on both of us just how powerful that could be.
I miss getting to work on the actual design… now I have to be all CEO-y. Catching up with you all is one of the highlights of my day. : )
With it running open loop, @dan, are there any concerns with the motors missing a step, which would result in a shift in the rest of the print (e.g. an engraving that is 8.5" x 11" and half-way through, something causes the motor to miss a step)? I’m thinking if a motor missed a step, you’d have a bigger problem on your hands, as that would mean something crashed, got in the way, or failed. From what I’ve seen posted by your team on here, it looks like it’s not a big deal. I’m just bringing along the “baggage” from a RepRap design that had undersized stepper motors running open loop and seeing a shift in my print. When I get this thing going on making those living hinges, I wouldn’t want the laser to be missing steps! Thanks a ton!
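A toy illustration of why a missed step matters so much in open loop: the controller’s commanded position and the motor’s true position diverge by a fixed offset for the entire rest of the job, since nothing feeds the error back. Resolution here is an assumed number for illustration.

```python
# Toy model: one lost pulse mid-job shifts everything after it by a
# constant offset, because open loop has no way to notice or correct.
STEPS_PER_MM = 80  # assumed drivetrain resolution

commanded = 0
actual = 0
for step in range(10_000):
    commanded += 1
    if step != 4_000:    # one pulse lost mid-print (e.g. a briefly stalled axis)
        actual += 1

offset_mm = (commanded - actual) / STEPS_PER_MM
print(offset_mm)  # 0.0125 -- and it persists for the whole rest of the print
```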
CEO (Chief Emotional Officer)
Don’t cry, it’s ok…
Yes - our solution to this is to not miss steps. : ) We had tons of bugs on this a while ago and beat them all out with heavy objects.
Missed steps are more of an issue with laser cutters than other gantry bots because soot tends to accumulate on the rails. The wheels need to ride pretty tight on the rails to avoid any wobble in the effector, so they get stuck really easily if there is any build up. Easy solution though: Keep your rails clean with a little soap + water. Don’t use any chlorinated cleaning solutions.
One of my biggest issues with G-code (at least with regard to modern 3D printers) is the printer’s dumb assumption that everything is as it should be, and its inability to respond to any sort of error. For example, you can end up with a 12-hour print which starts to fail on hour 2 (sometimes knocking the print over entirely), yet the printer continues to step through the G-code, extruding a rat’s nest of plastic for the next 10 hours. Presumably with all of the onboard cameras, image processing in the cloud, and the above-mentioned motion processing occurring in the cloud as well, the Glowforge could self-detect issues during the cut and either attempt to correct them in real time, or at least fail gracefully. Is this something you guys are doing? It seems like an obvious next step for 3D printers as they mature towards viable home manufacturing for the masses, but I have yet to see anybody even attempt it.
Generally very cool stuff though – we’re all big fans of standards like G-code, I’m sure, but I’m always happy to see them thrown out the window if it otherwise lets you take a big ol’ leap forward.
The big issue with that is… how does the computer know when something went wrong?
For us… trivial to recognize. But programming a computer to analyze a bundle of pixels and identify “a problem” with printing is quite a bit harder.
Agreed — it’s the feedback that’s missing. One major issue with 3D printing (filament-based) is what happens when the filament breaks or runs out during a print.
There are a couple of manufacturers that have built-in break detection, plus a number of aftermarket/DIY hacks, but they are of limited utility — there are so many things that can go wrong that it makes it quite difficult to accommodate them in advance.
First Law of Cybernetic Entomology: There’s always one more bug.
We already have 3D scanners on the market which use a combination of lasers and optical sensors to generate a 3D rendering of the object in virtual space. There’s no reason a 3D printer couldn’t have a similar built-in capability to perform a regular delta between the object being physically extruded and the source model (taking into account regular layer extrusion). It wouldn’t necessarily be exact, but it would know if the supports completely fell away, the printer stopped extruding, a stepper motor got jammed, the print started peeling, etc. (most of the issues associated w/ 3D printers). That said, there would be a ton of processing required to convert the point cloud into a 3D model and perform regular fuzzy deltas against the source model. However, the Glowforge approach of everything in the cloud would make this totally viable.
Since the Glowforge laser cutter functions in 2 dimensions instead of 3, the image processing would be significantly less complex. The cameras are already there, the horsepower in the cloud is already there… seems like this would be entirely doable. Crazy complicated – but doable.
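To sketch what that 2D check could look like (purely my speculation, not anything Glowforge has described): render an expectation of what the camera should see at this point in the cut, diff it against the actual frame, and flag the job if too many pixels disagree. Frame format, tolerance, and threshold are all made-up illustration values.

```python
# Speculative sketch of a cloud-side sanity check: compare an expected
# grayscale frame against the observed camera frame, pixel by pixel.

def fraction_mismatched(expected, observed, tolerance=16):
    """Fraction of pixels differing by more than `tolerance` (0-255 gray)."""
    bad = sum(abs(e - o) > tolerance
              for row_e, row_o in zip(expected, observed)
              for e, o in zip(row_e, row_o))
    total = len(expected) * len(expected[0])
    return bad / total

def looks_failed(expected, observed, max_bad_fraction=0.05):
    """Flag the job if more than 5% of pixels are way off (assumed threshold)."""
    return fraction_mismatched(expected, observed) > max_bad_fraction

# A frame that matches passes; a wildly different one gets flagged.
clean = [[0, 0], [0, 0]]
burnt = [[255, 255], [255, 0]]
print(looks_failed(clean, clean), looks_failed(clean, burnt))  # False True
```

Real-world lighting, smoke, and material variation would make the actual comparison far fuzzier than this, but the shape of the idea is the same.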
Just a thought. Sorry for the topic change, but this approach (cloud processing) opens up a lot of interesting possibilities.
Definitely a big item in the feature hopper: detecting issues during printing and reacting properly.