What happened with the engraving update?

A few prerelease customers reported problems with engraving on Monday. We tracked down the cause today and reverted the change. In short, some really cool software that’s half-done got turned on before it was ready. It allows for extremely low-power operation, but since it redefines “power 1” as much lower than before, it breaks our Proofgrade settings, among other things. My sincere apologies to the prerelease customers affected!

Expect to see it again in the future, but this time with more warning. :slight_smile:

42 Likes

“Is that a bug?”
“No. It’s a feature.”
:wink:

I’m sure the PRU owners understand things are being tweaked quite frequently. No doubt the engineers’ll get it dialed in soon enough!

9 Likes

I thought it was great news but did wonder why you hadn’t rolled out new Proofgrade settings with it. From the look of it we will have loads of control over the power settings and that is awesome!

Did the current PRU users need a heads up? Absolutely, I would have been just as lost as some of them were for a bit too. But that’s what these Pre-Release Units are for and I’m really enjoying the ability to update via the cloud.

3 Likes

Sorry to see the lower power engraves go. But I understand that the automatic Proofgrade capabilities take precedence for now.

4 Likes

Mmmpfffhhhhh! Bummer! :neutral_face:

Okay, when it’s ready.

3 Likes

We set expectations that stuff’s just going to change, and sometimes break… but for something like this we’d normally give a heads up anyway! Don’t want to cause unnecessary confusion & chaos.

15 Likes

I know I speak for, well, everyone: we really appreciate the follow-up and acknowledgment. These small things in regard to defects show a different type of engagement, separate from the general community engagement you already do, and they help show us what we can expect from production support, etc.

That’s my ramble, I hope it makes sense. Back to raiding Mythic Nighthold.

Thank you.

8 Likes

Oh no! I was excited for it! I didn’t get to try it on foam yet. I hope it makes its way back out soon!

6 Likes

This is all well and good for me. But, geez… after all the convoluted chatter I started last night about manual engrave settings :laughing: No matter though, I still learned a lot and it won’t be for naught.

6 Likes

I don’t know… I like a little chaos once in a while…keeps things interesting. :smile:

4 Likes

Nice glimpse of the potential of software pushes on existing hardware. Really amazing.

5 Likes

This makes me sad it’s gone. Really curious to see how it works with foam.

I just thought to myself: “Those people experiencing that bug are so lucky…”

Made myself laugh.

(envyyyyyyyyy)

11 Likes

@Dan, one thing that has infuriated folks in the 3D printing world is when they are using a tethered PC and MS decides, right in the middle of a multi-day job, to run a Windows update requiring an auto-reboot, and their 72-hour job stops at 70 hours or whatever. Is there (going to be) protection against mid-job updates? I can imagine working on some expensive acrylic or exotic wood and having the piece ruined mid-job by a “reboot”.

17 Likes

The last ‘update’ happened during an op. The machine kept going with no issues, but the UI thought the unit was offline, even after browser refreshes. The second it got done, the unit did its homing dance and everything was normal. The only issue with that is you can’t kill the op from the UI if something goes south. Obviously, if you open the lid on the machine it kills the job locally.

12 Likes

Obviously not an official response… but looking at things from a computational standpoint:

When you send in your 72-hour job, it is processed by the cloud computers before you ever press the Start button. That processing turns it into a waveform which will control your machine.

If, while your 72-hour job is running, Glowforge attempts to change things, they are changing either the cloud side or the machine side.

  1. Cloud-side changes: The change would be in how the computation happens, which is already done for your job, so it has no bearing on it. It is possible there could be a change to the communication protocols that would affect your job… but I would expect such a change to be exceedingly rare (especially since it would have to go hand in hand with a firmware adjustment).

  2. Machine-side changes: The changes would be to firmware, as that is the only thing on the machine. Here you could have issues, but only if they set up the update to ignore what the machine is currently doing, which would be a very odd case. It should instead be obvious to the system that the machine is currently communicating with the cloud, in which case it waits to download, or at least that the machine is in use, in which case it waits to flash (a rough sketch of that gating follows this list). It would still be possible for things to mess up here, but firmware adjustments are hopefully few and far between.
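
If it helps to picture that machine-side gating, here’s a toy Python sketch. Every name in it (`is_job_running`, `apply_firmware_update`, the dict standing in for the machine) is invented for illustration and has nothing to do with the actual firmware:

```python
import time

def is_job_running(machine) -> bool:
    # Hypothetical check; a real device would inspect motion-planner state
    # or an active-job flag rather than a plain dict.
    return machine.get("job_active", False)

def apply_firmware_update(machine, update_blob: bytes, poll_seconds: float = 30) -> None:
    """Stage the update right away, but refuse to flash while a job is running."""
    staged = bytes(update_blob)            # stand-in for writing the image to local storage
    while is_job_running(machine):
        # The gating described above: wait until the machine is idle so the
        # flash (and any reboot) can never land in the middle of a job.
        time.sleep(poll_seconds)
    machine["firmware"] = staged           # stand-in for the actual flash step

if __name__ == "__main__":
    machine = {"job_active": False, "firmware": b"v1"}
    apply_firmware_update(machine, b"v2", poll_seconds=0)
    print(machine["firmware"])             # b'v2'
```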

4 Likes

Since the spooling is specifically from the server (I hate the term cloud - you have a session with some server somewhere), when they redeploy their app to the application server (I don’t know which one they use, but they all pretty much work the same way), the session may - or, more likely, will, if it is not database-backed - be lost. That makes total sense: when the server software is redeployed, anything in RAM on the server is lost, so the record of which steps have already been sent would be lost too (they could database-back every packet, but geez, that would be expensive performance-wise).

The server-based software I develop works exactly the same way. We “quiet” the system so everyone is eventually booted off and locked out until the deploy happens, which then requires users to re-login, because we have to have a deterministic state (anything else means the client and server would be out of sync - which is different from session healing, where, if your network drops, we can heal the session connection because we know both ends haven’t changed).
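
For what it’s worth, that “quiet the system before a deploy” flow looks roughly like this in toy Python. The class and method names are made up for the example and aren’t anyone’s real stack:

```python
class AppServer:
    """Toy model of a server whose session state lives only in RAM."""

    def __init__(self, version: str):
        self.version = version
        self.sessions = {}            # session_id -> in-memory state (lost on redeploy)
        self.accepting_logins = True

    def quiesce(self):
        # Step 1: stop accepting new logins so the session count can only shrink.
        self.accepting_logins = False

    def drain(self):
        # Step 2: boot and lock out remaining users; their in-RAM state is gone,
        # which is why everyone has to re-login after the deploy.
        self.sessions.clear()

    def deploy(self, new_version: str):
        # Step 3: redeploy only once nothing depends on in-memory state,
        # so client and server restart from a known, deterministic state.
        assert not self.sessions, "deploy would destroy live sessions"
        self.version = new_version
        self.accepting_logins = True

server = AppServer("v1")
server.quiesce()
server.drain()
server.deploy("v2")
print(server.version)                 # v2
```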

1 Like

Meh, I am terminology fluid. I realized how silly I was to hold on to jargon when I first ranted about someone calling a computer “program” an “app.”

I suppose the server side does have quite a few options for how it would function. But my assumption is that the output is fully processed and then stored as a data file somewhere. At that point, communication would be offloaded to a different system than the one where processing happens. This is the step that isolates you from any updates to processing, and where you likely move to a session with a completely different server.
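
A toy sketch of that separation, with entirely hypothetical names: the planner writes a finished plan file, and a separate streamer only ever reads that file, so a redeploy of the processing side can’t touch a job that’s already been computed:

```python
import json
import tempfile
from pathlib import Path

def plan_job(design: dict, out_dir: Path) -> Path:
    # Processing step: compute the full motion plan up front and persist it,
    # so it no longer depends on the planning service staying up (or unchanged).
    plan = {"steps": [{"path": p, "power": 1} for p in design["paths"]]}
    out_file = out_dir / "job_plan.json"
    out_file.write_text(json.dumps(plan))
    return out_file

def stream_job(plan_file: Path):
    # Delivery step: a separate component reads the stored plan and feeds it
    # to the machine; it never calls back into the planner.
    for step in json.loads(plan_file.read_text())["steps"]:
        yield step

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        plan_path = plan_job({"paths": ["a", "b", "c"]}, Path(d))
        print(sum(1 for _ in stream_job(plan_path)))   # 3 steps streamed
```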

1 Like

That just changes which application can kill your job (the spooler rather than the slicer) if it gets updated. It’s also complicated to move files around between applications like that in a “cloud” environment, as you have to make sure they are all in the same instance (without some complicated shenanigans - which I do deal with in our application, but it’s a lot of extra work).
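
To illustrate the instance-affinity point (purely a toy example, nothing specific to anyone’s real deployment): a file written to one instance’s local disk simply isn’t visible to a spooler running on a different instance:

```python
import tempfile
from pathlib import Path

def write_plan(instance_dir: Path) -> Path:
    # The planner writes the processed job to *its own instance's* local disk.
    plan = instance_dir / "job_plan.bin"
    plan.write_bytes(b"...waveform...")
    return plan

def spooler_can_see(instance_dir: Path, name: str = "job_plan.bin") -> bool:
    # The spooler only sees files on whichever instance it happens to run on.
    return (instance_dir / name).exists()

instance_a = Path(tempfile.mkdtemp())   # stands in for one server instance's disk
instance_b = Path(tempfile.mkdtemp())   # a different instance, a different disk

write_plan(instance_a)
print(spooler_can_see(instance_a))      # True  - planner and spooler share an instance
print(spooler_can_see(instance_b))      # False - the handoff breaks across instances
```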

1 Like

The architecture is designed so that “updates” happen when the machine is not in use. On the cloud/server side, we use Docker & Kubernetes, so we can actually create new server images with the update and switch clients over when they’re idle (although I’m not sure how much of this is implemented right now).
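
A toy sketch of what “switch clients over when they’re idle” could look like; the names and routing logic here are illustrative only, not our actual code:

```python
class Router:
    """Toy router: busy clients stay pinned to the old server image,
    idle clients get routed to the updated one."""

    def __init__(self, old_image: str, new_image: str):
        self.old_image, self.new_image = old_image, new_image
        self.active_jobs = set()        # client ids currently mid-job

    def start_job(self, client_id: str):
        self.active_jobs.add(client_id)

    def finish_job(self, client_id: str):
        self.active_jobs.discard(client_id)

    def route(self, client_id: str) -> str:
        # A client mid-job keeps talking to the image that started the job;
        # only idle clients are switched over, so updates never land mid-job.
        return self.old_image if client_id in self.active_jobs else self.new_image

router = Router("server:v41", "server:v42")
router.start_job("printer-123")
print(router.route("printer-123"))      # server:v41 (busy, keep the old image)
router.finish_job("printer-123")
print(router.route("printer-123"))      # server:v42 (idle, switch over)
```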

9 Likes