Very slow workflow


I’m trying to cut a batch of small objects, each with two engraves and a cut. I can fit 14 of them into the printable area in the Glowforge sheet of Proofgrade material, but it’s a 12 MB SVG file (from an 8 MB .ai file) which won’t print - after a half hour or so, it fails.

If I print one object at a time, it takes 20 minutes, but the scan/process cycle is very slow, perhaps 15 minutes. If I print two at a time, they take 40 minutes, but the scan/process cycle is perhaps a half hour. Three sometimes works, sometimes fails.

If I could just run the job on the complete sheet, it should take about 4:40 to cut, after one button press. Instead I have to run seven batches of two, each of which takes over an hour: turn different objects on and off between batches, click print, wait a half hour for the Glowforge light to come on, hit the button, then wait 40 minutes for the job to complete - seven times over. So it takes about twice as long as it should, and requires frequent intervention to keep the process going.

I’ve already decreased the resolution of the images - they’re 500 KB and 250 KB PNGs, so not huge.

How can I make this a “one click per batch” process? Or is it a matter of GF getting their buffering working so they can process large jobs?


What’s the LPI? Take that down to a still-acceptable level (270 or 225 is usually fine). If that doesn’t work, then it’s an issue with the buffer. BTW, a 4:40 job is one that won’t go - there’s something in the neighborhood of a 3-hour limit now.


Would simplifying the file help? What is that site, OMGSVG?


I am using the “Proofgrade” 3D carve and deep graphic for the two bitmaps. I assumed that the LPI is whatever is optimal for clear acrylic, which is well over 300 lpi for both the graphic and the 3d carve. I’ll try dropping LPI down to 225 and see if that helps. What’s the relationship between LPI and power? If I cut LPI in half, I should double power or cut speed in half, to carve the same amount, right?


The SVG is clean - it’s just 28 circles (four control points each, beziers) with 28 greyscale bitmaps embedded, from Illustrator. The GF appears to have a limited buffer size that the engraves exceed. Lower LPI should help - half the LPI might equate to a quarter of the data being sent to the GF, meaning that I can print more.
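
A back-of-the-envelope way to check that scaling guess (a rough sketch only - the real motion-plan format isn't public, so rows × samples stands in for data volume):

```python
# Rough estimate of engrave data volume vs. LPI.
# Assumes the data scales with the number of raster rows (height * LPI)
# times the samples per row (width * horizontal DPI). All numbers here
# are placeholders, not actual Glowforge internals.

def raster_samples(width_in, height_in, lpi, dpi):
    rows = height_in * lpi   # one pass per raster line
    cols = width_in * dpi    # samples along each pass
    return int(rows * cols)

full      = raster_samples(2.0, 2.0, 450, 450)
half_lpi  = raster_samples(2.0, 2.0, 225, 450)  # only line spacing changes
half_both = raster_samples(2.0, 2.0, 225, 225)  # if dot pitch scaled too

print(full // half_lpi)   # 2: halving LPI alone halves the data
print(full // half_both)  # 4: a quarter only if horizontal resolution drops too
```

So half the LPI is guaranteed to halve the row count; whether it reaches a quarter of the data depends on whether the horizontal sampling drops with it.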


The math doesn’t work. How can a bunch of vectors and some modest .PNGs add up to 12MB? What resolution are the .PNGs? Is there some mysterious object in there taking up memory?


Glowforge has approx a 3 hour job buffer at the moment so any more than 8 or 9 pieces of this particular design is simply a no-go situation.

There are limitations to the complexity of the artwork.

28 bitmap images embedded into an SVG is likely to hang the GFUI. If it does go through, that’s going to generate 28 engrave operations, to which you will have to assign settings individually.

If you have bitmap images laid over on top of other bitmap images, those overlapped areas will be engraved twice because each bitmap is its own operation. If this is not how you want it to behave, you need to combine the individual bitmaps and reduce the number of them.


Each object is a 250 KB PNG and a 500 KB PNG, plus a few vectors. There are 14 of these packed into one panel, so that’s about 10 MB of data. SVG isn’t terribly compact - it’s a text format - so expanding to 12 MB isn’t shocking. What was a bit shocking is that a 12 MB file is unprintably large for the GF. But the file sent to the GF is much larger still - it’s the low-level instructions to the steppers and laser - so the high lines-per-inch of the default 3D carve and engrave generates far more data than the actual images.
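
The arithmetic above, sketched out with the sizes quoted in the thread:

```python
# File-size estimate for the packed sheet, using the figures from the post.
objects      = 14
png_small_kb = 250
png_large_kb = 500

raw_kb = objects * (png_small_kb + png_large_kb)
print(raw_kb / 1000)  # 10.5 (MB) of embedded bitmap data

# SVG embeds bitmaps as base64 text, which inflates them by roughly 4/3,
# so ~10.5 MB of PNGs alone could account for a 12 MB file even before
# counting the vector paths.
base64_kb = raw_kb * 4 / 3
print(round(base64_kb / 1000, 1))  # 14.0 - an upper bound on the text size
```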

So now I’m cutting the LPI in half to see if it helps. It took a 22-minute job down to 16 minutes, which is good. And the ‘processing’ time was much faster, too. In 20 minutes I’ll know whether the result looks good.


Dropping down to 195 LPI visibly degraded both the 3d carve and the graphic engrave. Trying 270…


This is where the cloud processing should help. They should intelligently break down the design automagically for us and deliver it to the :glowforge: in the most efficient and effective way. It makes no sense that we have to do that ahead of time.


I am guessing that right now the firmware can only cut jobs that fit into its buffer all at one time. If so, they REALLY need to implement streaming into the buffer so that you can print much larger jobs, streaming the job to the printer as it goes. This is an extremely well understood problem - it’s how pretty much all printers, 2d or 3d or laser, work now.


Bummer. Remember the days when if a file download failed in mid stream, you had to start from the beginning? Yeah, that sucked. So the smart ones started noting which bytes you got and resumed where you left off. I get that they shouldn’t have to build an infinitely large buffer. The cloud can batch and send the job in chunks, with a handshake to know when the device is ready to proceed.
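
A minimal sketch of that chunk-and-acknowledge idea (everything here is hypothetical - the buffer size and protocol are made up, not how Glowforge's cloud actually talks to the device):

```python
# Toy producer/consumer showing chunked transfer with a ready handshake.
# The "cloud" only sends the next chunk when the "device" has buffer room,
# so total job size is no longer limited by the device's buffer size.
from collections import deque

BUFFER_LIMIT = 4  # device holds at most 4 chunks at a time (made-up number)

def stream_job(chunks):
    buffer = deque()
    sent = executed = 0
    while executed < len(chunks):
        # Cloud side: keep sending while the device signals room.
        while sent < len(chunks) and len(buffer) < BUFFER_LIMIT:
            buffer.append(chunks[sent])
            sent += 1
        # Device side: execute one buffered chunk, freeing a slot.
        buffer.popleft()
        executed += 1
    return executed

job = [f"chunk-{i}" for i in range(100)]  # far larger than the buffer
print(stream_job(job))  # 100: the whole job runs despite the tiny buffer
```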

Maybe this is in the works. I’m betting that it is. Wherever a solution exists to make the :glowforge: automagical, they seem to have at least considered it. They just have too many user stories to crank out in their current sprints so their parking lot is loaded up.


I am sure they know about and plan to implement streaming printing. It’s harder than what they’re doing now, so I can see why they deferred it. But it does mean that we have to do work to compensate for the limitation, running many small aligned jobs instead of one big one.

It might be my imagination, but ‘processing’ seems a bit faster now than yesterday?


This is where a simple array feature would be nice: I want this object repeated 7 across and 4 down. Then you are only sending one small file, and the cloud can just repeat the print with offsets.
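
The array idea could be as simple as one object definition plus repeated references with offsets. A sketch using SVG’s `<use>` element (the part geometry and spacing are placeholders):

```python
# Generate a 7 x 4 array of one object using SVG <use> with offsets,
# so the file defines the object once rather than 28 times.
cols, rows = 7, 4
pitch_x, pitch_y = 1.5, 2.0  # spacing in inches (placeholder values)

uses = [
    f'  <use href="#part" x="{c * pitch_x}" y="{r * pitch_y}"/>'
    for r in range(rows) for c in range(cols)
]

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="12in" height="10in">\n'
    '  <defs><circle id="part" r="0.5"/></defs>\n'
    + "\n".join(uses) + "\n</svg>"
)
print(len(uses))  # 28 placements from a single definition
```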


The ability to make multiples would be nice - uploading one file, then telling the GFUI to make 13 more copies and pack them into the sheet, would be great. 3D printer slicers do that sort of thing well, and GF could copy it. Though in my current job each thing is unique (the bitmaps are unique snowflakes), so multiples wouldn’t help me right now - but it’d be awesome for banging out keychains at a Maker Faire.

Now that I’ve got the design such that I can print a sheet in 5 jobs, each of which prints in just over an hour, the thing that’s slowing throughput is ‘processing’, which feels like it’s taking about as long as the actual lasering. As far as I can tell, I have to wait for a job to finish, then start the setup of the next job (turning things on and off in the GFUI to print the next part of the sheet), then send to the printer, wait for ‘preparing’, which takes ages, then hit the button.

Given that the Glowforge’s server runs on Google App Engine, I’m surprised that processing would be so slow - GAE scales extremely well, spinning up new servers as needed. Perhaps GF has configured it onto smaller servers to save money? Or perhaps the software just needs performance tuning?

And a minor feature request - when processing is done and the printer’s ready to start, it would be AWESOME if the GFUI (or the printer) made a unique sound so that I could go hit the button. The computer and Glowforge are in separate rooms, and since the processing takes so long, I switch to other tasks in the foreground. Having to visually check the GFUI window means that I don’t immediately know when the printer is ready to start, and a ‘bing’ sound would speed my workflow.

Yeah, printing hundreds of these things is time consuming, and I’m trying to optimize the process.


After trying several resolutions, I’m sticking with the Proofgrade defaults - dropping LPI made the results visibly lower quality, and (to my surprise) didn’t really speed up the job very much - 1/2 the LPI took a single-object job from 20 minutes to 16 minutes. And in this case, I want the result to look amazing, so I’ll accept the 20% time tradeoff. If 1/2 the LPI printed in half the time, I would have been tempted.


One more suggestion is to orient the engraves horizontally rather than vertically. Since the head moves back and forth faster than up & down, it pays to make engraves on larger jobs more horizontally oriented.

A box 10 inches wide by 3 inches tall will process faster than one that is 3 inches wide and 10 inches tall.
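
A rough model of why orientation matters (the head speed and per-line turnaround cost here are made-up numbers, just to show the shape of the tradeoff):

```python
# Estimate engrave time for a W x H box: one horizontal pass per raster
# line, plus a fixed turnaround cost at the end of each line.
def engrave_time_s(width_in, height_in, lpi=225,
                   head_ips=40.0, turnaround_s=0.05):
    lines = height_in * lpi                      # passes scale with height
    per_line = width_in / head_ips + turnaround_s
    return lines * per_line

wide = engrave_time_s(10, 3)  # 10" wide x 3" tall
tall = engrave_time_s(3, 10)  # same area, rotated
print(wide < tall)  # True: the tall orientation pays the turnaround cost
                    # 2250 times instead of 675
```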

Multiple engraved objects can complicate this more - on traditional lasers the engraves tend to sweep all the way across the bed and then step up the Y axis. The GF’s pathing does a decent job of breaking up separate blocks of engraving, so it tends not to do so many full-bed sweeps, but if you have your engraved objects embedded in a larger object, sometimes that overrides the optimization. Watch it as it’s engraving and see if it’s spending a fair amount of time moving but not lasing. That’s where you can pick up more speed improvements.


Good advice!

The bitmaps are horizontal or square. The GF software does a great job of printing each bitmap separately, optimizing the path between them. And it ignores whitespace nicely, so the head doesn’t move further than it has to. For example, one graphic is round, and at the top and bottom of the circle the head only moves enough to cover the area of the circle, not the much wider square that is the bitmap. Very cool!

Right now the main inefficiency I see is between jobs - processing takes forever. If I could do the setup and processing for the next job while the first one is cutting, that would save a lot of time. But when I tried running the GFUI in two browser windows at once I got an error, so since then I’ve been doing one thing at a time.


In my study of the history of computing, I learned that teletypes had transmission control commands to allow for chunks of data to be transmitted. They also had a control character (BEL) to ring a bell to get the teletype operator’s attention.
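
That BEL character still works in most terminal emulators today, so a local ‘job done’ chime is nearly a one-liner (assuming your terminal has the bell enabled - this just sleeps through a known job length, since there’s no public API to poll):

```python
import sys
import time

def chime(delay_s):
    """Wait out a job of known length, then ring the terminal bell."""
    time.sleep(delay_s)
    sys.stdout.write("\a")  # BEL, ASCII 0x07: the old teletype bell
    sys.stdout.flush()

chime(0)  # rings immediately; pass e.g. 40 * 60 for a 40-minute job
```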

Dan has mentioned on numerous occasions that there are some deep bugs in the rendering pipeline, that they know what and where those bugs are, that they will be fixing them, and that they will let us know when they’re fixed.


Yep, these are not new issues. It’s really just a matter of the software maturing. So I’m not upset or anything. But since I have a LOT of time waiting, I have lots of time to think about how things could be improved so that I spend less time waiting. :slight_smile:

Certainly Glowforge should work on “printing properly” before “printing faster”.

I’d argue that being able to print jobs larger than the buffer (i.e. ‘streaming’) is a part of printing properly. Without that, they can’t print large or complex jobs at all. This is a hard problem (buffer management, dealing with communications issues, etc.) but it’s one that all other printers have solved one way or another. Admittedly the GF is streaming over the internet rather than USB, so it’s more of a challenge, but it’s one that thousands of devices have solved (e.g. streaming video and audio) so it’s one that can be solved with sufficient effort.

To queue one up in the hopper: throughput would go way up if the ‘processing’ and ‘printing’ status windows were non-modal, so that they could be minimized and I could work on the next job - configuring the cuts, processing, etc. - everything short of actually sending the processed job to the Glowforge. I don’t know how the GFUI is written internally, so I can’t say whether that’s easy or hard.