Does print time vary depending on cloud load?

Does the time it takes to print vary depending on how many people are using GFUI?

I can see that time to prepare your design might vary… right? But does print time or the file you get back vary?

When the cloud prepares a file, does it store it locally or in your GFUI at all? For example, if I printed something once… and then I made 100 more prints of it… does GFUI have to do all that lifting each time I come back to the interface? It would seem that if there are load issues (weekends, maybe), then storing the file so it didn’t have to go through cloud processing each time would be a good thing for everyone?

2 Likes

No.

Preparation could be impacted by server load, but the actual run time/print time/motion plan won’t be any longer.

The motion plan is downloaded to storage on the Glowforge and then run locally. I don’t know if the motion plan is deleted immediately after processing (@scott.wiederhold probably knows), but the unit won’t reference back and run the motion plan again.

From a production standpoint, I think it would be great if it did keep a local copy of that file (until the next file overwrites it, since storage is limited), so one could load new material and just run the job again and again.

If anything in the UI moved, then it could request another motion plan (since it would obviously need one for all of the new coordinates).
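A minimal sketch of that keep-and-reuse idea, assuming a hypothetical compile step and keying the cache on everything the plan depends on (design, settings, position, focus height), so any change forces a fresh motion plan exactly as described above:

```python
import hashlib
import pathlib

CACHE_DIR = pathlib.Path("motion_plan_cache")   # hypothetical cache location
CACHE_DIR.mkdir(exist_ok=True)

def cache_key(design: bytes, settings: str, position: str, focus_height: str) -> str:
    """Hash everything the motion plan depends on, so any change forces a rebuild."""
    h = hashlib.sha256()
    for part in (design, settings.encode(), position.encode(), focus_height.encode()):
        h.update(part)
    return h.hexdigest()

def get_motion_plan(design, settings, position, focus_height, compile_fn):
    """Reuse a cached plan for an identical job; otherwise compile and cache it."""
    path = CACHE_DIR / cache_key(design, settings, position, focus_height)
    if path.exists():
        return path.read_bytes()                  # repeat print: skip the cloud round trip
    plan = compile_fn(design, settings, position, focus_height)   # hypothetical compiler
    path.write_bytes(plan)
    return plan
```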

6 Likes

The job is compiled as a one-time-use file that is temporarily stored on a Google cloud file share. The file has an expiration timer (I don’t know the exact value, but trying to download it after about 10 minutes or so fails).

This appears to occur each time you ‘print’ the job, whether or not you’ve made any changes. So, yes, it does re-invent the wheel each time, so to speak. But I think it would be more challenging to cache all of these jobs than to just process them again.

After processing, the cloud sends a print command to the device that contains the link to the file. The device then downloads the file locally, as does your browser (it’s used to render the animated preview). You can download that file too, if you know how to get the link.

As near as I can tell, the device keeps the contents of that file in memory only as long as it takes to run the job, and does not write it to any persistent storage.
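For the curious, grabbing that file before the link goes stale is just an ordinary HTTP download. The URL below is a placeholder, not the real one, and the ~10-minute window is only what’s been observed:

```python
import requests

# Placeholder URL: the real link arrives in the print command and stops working
# once the expiration timer (roughly 10 minutes, apparently) runs out.
MOTION_FILE_URL = "https://storage.googleapis.com/example-bucket/motion-plan.bin"

resp = requests.get(MOTION_FILE_URL, timeout=30)
if resp.ok:
    with open("motion_plan.bin", "wb") as f:
        f.write(resp.content)
else:
    print(f"Download failed ({resp.status_code}); the link has probably expired.")
```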

10 Likes

Repeating jobs has been requested before.

1 Like

As far as I can tell, every time a job is ‘printed’ it is prepared again, even if it’s an identical job being re-printed. I’d hope that once GF has all of the ‘functionality’ items resolved, they’ll move on to ‘make it faster’ stuff like this, as it’d save a LOT of time not re-preparing repeat jobs.

As others have posted, re-running the last job locally in the GF would be awesome.

4 Likes

The motion file has the Z motor steps to focus the laser at the height determined by the red dot scan (if you haven’t overridden it), so it is actually specific to the material placed in the machine. That is one reason why it regenerates it each time even if the design has not changed. It also is specific to where the print is positioned.

Sounds like it would be more efficient to operate with higher level instructions than stepper motor ‘wave’ files.

Yes, it is the least space-efficient representation I can imagine. A lot of the bytes simply mean “do nothing for 100 µs.” I compressed one 27:1 with 7zip, so there’s not much entropy.
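A quick way to see why: build a synthetic byte stream where almost every byte is an idle code and see what zlib makes of it. The codes below are stand-ins, not the real file format:

```python
import random
import zlib

IDLE = 0x00                      # stand-in for a "do nothing for 100 µs" byte
STEP_CODES = list(range(1, 9))   # stand-ins for the handful of real step opcodes

random.seed(0)
# Synthetic "wave file": roughly one step byte for every 30 idle bytes.
waveform = bytes(
    random.choice(STEP_CODES) if random.random() < 1 / 30 else IDLE
    for _ in range(1_000_000)
)

compressed = zlib.compress(waveform, level=9)
print(f"raw: {len(waveform):,}  compressed: {len(compressed):,}  "
      f"ratio: {len(waveform) / len(compressed):.1f}:1")
```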

2 Likes

Sounds like at the very least they could save tons of bandwidth by compressing the file before sending. They could also store it compressed in the buffer and decompress while streaming to the motors, which would in effect make the buffer 7x larger. That would be an amazing improvement for me, anyway; I have to chop large jobs up into small ones…
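A rough sketch of the store-compressed, decompress-as-you-stream idea using zlib’s incremental decompressor; the chunk size and the feed_motors callback are made up for illustration:

```python
import zlib

MOTOR_CHUNK = 4096   # hypothetical size of each block handed to the step generator

def stream_motion_plan(compressed_plan: bytes, feed_motors) -> None:
    """Keep only the compressed plan plus one small decompressed chunk in memory."""
    decomp = zlib.decompressobj()
    pending = b""
    for i in range(0, len(compressed_plan), MOTOR_CHUNK):
        pending += decomp.decompress(compressed_plan[i:i + MOTOR_CHUNK])
        while len(pending) >= MOTOR_CHUNK:
            feed_motors(pending[:MOTOR_CHUNK])   # hand one fixed-size block to the motors
            pending = pending[MOTOR_CHUNK:]
    pending += decomp.flush()
    if pending:
        feed_motors(pending)
```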

Yes, a very easy fix for the buffer problem, but every time I suggest something like this would be quick and easy to do, I get jumped on by the people who think it is reasonable for it to take years.

2 Likes

If I hadn’t done a lot of communications code over the years, I suppose I might think it takes years, too! :slight_smile:

I’ve been in software development for more than 30 years. I’ve found that, generally, no single feature takes a lot of time; it’s the aggregation of features that takes years.

It’s easy to look from outside and see all that’s not done and know that any of those individual features should be quick and easy to implement. What we don’t acknowledge are all the ones seen & unseen that are done and took time. Any set of features developed means there is another set that won’t have been done yet.

There’s a lot of confirmation bias here from folks who are sure they could have done this far quicker because the features that aren’t done look pretty easy. Yet those are the ones we typically prioritize last, because they don’t carry the same risk of turning into a hairy problem that derails things. And if they’re going Agile, those also tend to sit at the bottom of the backlog stack.

Personally I’m impressed by the guys who developed LaserWeb and Whisperer for the Cheap Chinese Laser community. They’ve done a good job of seeing an opportunity to do things differently and actually doing it.

2 Likes

Yes, I manage teams of developers, and am very familiar with managing competing priorities and with how a large backlog can take a very long time to work through. In this case, @palmercr and I agree that there’s a high-impact bug in the current GF system (the fixed-size buffer) that limits the GF to only small cut jobs, and that it has a simple, clean fix that is much easier and safer to implement than what GF has talked about doing to address the bug. We’re hoping the quick, easy fix gets implemented sooner rather than later.

That’s where there is not likely consensus. You’re making 2 assumptions - first that it’s high-impact and second that as a result only “small” jobs are supported.

Based on the thousands of things we’ve seen done vs. the number of “I can’t get this job to run” reports, it seems to be a 10% case rather than a 90% one, and thus wouldn’t make my list of high-impact items. Similarly, 3.5 hours is a long time to babysit the laser while a job runs; that seems pretty large. And considering the number of multi-thousand-node or complex engraves I’ve done and seen here, I’d dispute the characterization of the GF as limited to small jobs.

So two sets of reasonable people come up with diametrically opposed assessments of a feature’s criticality. The Product Owner typically decides and that’s someone else entirely :smiling_face:

1 Like

The problem is extremely well understood: any job that generates a “wave file” larger than the buffer in the GF fails, and the buffer is fairly small, so you cannot engrave anywhere near a full sheet of material.

There is a work-around: break a large job into many small jobs and run them one at a time on the same material. That requires lots of additional work, both in design and when running the laser, and I’ve spent days of wasted time due to this bug. There was a lot of complaining about mysterious job failures until people like @palmercr figured out the details; since then there are fewer repeat complaints, because each new owner posts about the problem and someone explains the work-around. But having a tedious work-around still means that many people are doing a lot of unnecessary work to get jobs done despite the bug.

Perhaps you didn’t make the connection, but the long discussions about how to align engraves and cuts between jobs, which have been going on for months, are driven by the GF being unable to perform large jobs. This issue comes up weekly…

GF has acknowledged the bug. The solution they discussed was streaming data to the GF, which is more complex to implement. The proposal @palmercr and I are making, applying standard compression to the data to in effect make the buffer much larger, is very easy to implement and would dramatically expand the GF’s ability to do large engraves.

Probably because I only have one eyebrow. Kinda hard to think good. I was thinking my 24,000 node Sunstone vector was large but maybe you were meaning other complicated projects.

My point is your evaluation of a problem is not the only possible one and GF has chosen differently. Just because you would prioritize the product backlog differently than they do doesn’t make them wrong. But I can see how you’re getting there from your point of view.

128MB is massive for a 2D laser. Most 3D prints are an order of magnitude smaller when expressed in G-code, which is itself very space-inefficient. GF have invented the world’s most inefficient representation of a 2D laser job. It is so inefficient it has 8 ways of representing “do nothing for 100 µs.”

1 Like

Probably an inexperienced-dev issue. I’d expect there is a stack of those things that @Dan has in the product backlog to be addressed before moving the s/w from beta. It’s the kind of thing that’s easy to find (TFS scan or whatever source control app they use) and fix as you tweak up a product for handover from the pdev team to the maintenance team.

Seems like a run length encoding scheme would work well here…
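Something like this toy encoder, for example, where long runs of the idle byte collapse into (count, value) pairs; it’s not the real format, just the idea:

```python
def rle_encode(data: bytes) -> bytes:
    """Collapse runs of identical bytes into (count, value) pairs, count capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for count, value in zip(data[::2], data[1::2]):
        out += bytes([value]) * count
    return bytes(out)

# Long stretches of "do nothing" bytes shrink dramatically.
sample = b"\x00" * 3000 + b"\x05" + b"\x00" * 3000
encoded = rle_encode(sample)
assert rle_decode(encoded) == sample
print(len(sample), "->", len(encoded))   # 6001 -> 50
```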

Yes it would, but zip compresses the GOGM 27:1, which would be good enough for any job. Replacing step-and-direction with step+ and step− would make it compress even more, as all the do-nothings would be 0 instead of one of 8 possibilities.

There are off-the-shelf open-source zip libraries, so it is easy to add one at each end. I would expect a competent programmer to complete it in a day.
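In Python terms, “one at each end” really is about one call per side; the length-prefix framing below is invented just to show where each call would sit:

```python
import struct
import zlib

def pack_for_device(motion_plan: bytes) -> bytes:
    """Cloud end: compress the plan and prefix it with the payload length."""
    payload = zlib.compress(motion_plan, level=9)
    return struct.pack(">I", len(payload)) + payload

def unpack_on_device(frame: bytes) -> bytes:
    """Device end: strip the length prefix and decompress back to the raw plan."""
    (length,) = struct.unpack(">I", frame[:4])
    return zlib.decompress(frame[4:4 + length])

# Round-trip check with a dummy, idle-heavy plan.
plan = b"\x00" * 500_000 + b"\x03" * 100
assert unpack_on_device(pack_for_device(plan)) == plan
```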