Very slow workflow

Would it make a difference to the GFUI to flatten all your bitmaps to a single arrayed image before placing in Illustrator?

1 Like

Just a warning… If you hit print before the previous job has done its final head dance and before the print button flashes once, you will get stuck scanning forever. It’s not really ready for a new job until about 14 seconds after it says Ready. The Ready notice just tells you that it is OK to lift the lid.

3 Likes

No it isn’t. All they need to do is, when the first bufferful is finished after three hours, send another 100MB and start again (without waiting for the button press) instead of going into the cool-down / print-ended mode. There would just be a few seconds’ pause.

Communication issues are all handled by lower-level protocols like TCP. The files are downloaded over HTTPS.

1 Like

Thanks, that’s helpful! I’ve had to power cycle once or twice to get out of that situation. Waiting 14 seconds is a lot faster!

1 Like

Well, it’s a little more involved - they probably don’t have enough storage to hold the entire job, so they need to manage the data flow, pulling new data down into the space freed up by having processed earlier parts of the job, perhaps using something like a ring buffer with some flow control between the printer and server - they can’t send faster than the printer can cut (plus the buffer size). Not insurmountable - every serial or USB device deals with this sort of thing - but a little more complex than a plain TCP transfer. You could start with FTP or SCP so you get some integrity checking and the ability to pause and resume a transfer, retransmit corrupted blocks, etc. All well-understood stuff.
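
To make that concrete, here is a rough Python sketch of a ring buffer with that kind of flow control. The "server" and "printer" roles are simulated in one process, and nothing here reflects Glowforge’s actual firmware or protocol:

```python
# Minimal ring-buffer sketch of the flow control described above.
# The "server" only sends when the "printer" has free space; the printer
# frees space as it consumes motion data. Purely illustrative.
class RingBuffer:
    def __init__(self, size):
        self.buf = bytearray(size)
        self.size = size
        self.head = 0          # next write position
        self.tail = 0          # next read position
        self.used = 0          # bytes currently stored

    def free_space(self):
        return self.size - self.used

    def write(self, data):
        """Store incoming data; caller must check free_space() first."""
        for b in data:
            self.buf[self.head] = b
            self.head = (self.head + 1) % self.size
        self.used += len(data)

    def read(self, n):
        """Remove up to n bytes for the motion controller to play back."""
        n = min(n, self.used)
        out = bytearray()
        for _ in range(n):
            out.append(self.buf[self.tail])
            self.tail = (self.tail + 1) % self.size
        self.used -= n
        return bytes(out)

# Flow control: the "server" asks how much room there is before sending.
rb = RingBuffer(1024)
job = bytes(range(256)) * 16            # stand-in for motion data
sent = played = 0
while played < len(job):
    room = rb.free_space()
    if room and sent < len(job):        # never send more than the free space
        chunk = job[sent:sent + room]
        rb.write(chunk)
        sent += len(chunk)
    played += len(rb.read(64))          # printer consumes as it cuts
```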

1 Like

In this case, one of the bitmaps is a 3D carve, and the other is a (graphic) engrave of a text label placed just below the carve. I tried combining the two (turning the ‘engrave’ into a light color in the 3D carve), and the resulting SVG file was smaller, but the engraving wasn’t any faster than doing them separately. I was a bit surprised, actually.

I could look into single-stroke fonts, which can be scored instead of engraved. It’s a little more complex to produce (my script that generates this would need to generate an SVG instead of a bitmap). But it might save a few minutes per object, which would add up.
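
As a rough sketch of what that script change might look like, the snippet below emits stroke-only SVG paths that the GFUI could be told to score. The two-character single-stroke "font" is invented purely for illustration; a real version would pull polylines from something like a Hershey font set:

```python
# Sketch: emit a stroke-only SVG so the text can be scored rather than engraved.
# The single-stroke "font" below is a made-up two-letter sample, not a real font.
STROKES = {
    "I": [[(0.5, 0.0), (0.5, 1.0)]],                        # one vertical stroke
    "L": [[(0.0, 0.0), (0.0, 1.0)], [(0.0, 1.0), (0.8, 1.0)]],
}

def text_to_svg(text, char_width=10, char_height=10, spacing=2):
    paths = []
    for i, ch in enumerate(text):
        x_off = i * (char_width + spacing)
        for stroke in STROKES.get(ch, []):
            d = "M " + " L ".join(
                f"{x_off + x * char_width:.2f} {y * char_height:.2f}"
                for x, y in stroke
            )
            # A stroked, unfilled path is what lets the GFUI treat this as a score.
            paths.append(f'<path d="{d}" fill="none" stroke="black" stroke-width="0.1"/>')
    width = len(text) * (char_width + spacing)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}mm" height="{char_height}mm" '
            f'viewBox="0 0 {width} {char_height}">\n  '
            + "\n  ".join(paths) + "\n</svg>")

print(text_to_svg("LIL"))
```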

2 Likes

I’ve worked with said teletypes. The flow control on them was the other direction. The computer could tell the teletype to start or stop sending.

This was because the teletype had a paper tape reader. The operator could load a tape containing data or a sequence of preprogrammed responses and the remote computer could enable the automatic start/stop of the reader.

There is a reason the ASCII codes 0-31 (0x00-0x1f) are called Control Codes, and why the Control key exists. Those codes were used with various kinds of equipment to feed paper, ring bells, perform flow control, etc.
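
For anyone who hasn’t met them, two of those codes (XON/XOFF, i.e. DC1/DC3) are still the textbook example of in-band flow control. Here is a toy sketch, with `link` standing in for whatever byte stream connects the two ends:

```python
# Toy in-band flow control using two of those control codes:
# XON (DC1, 0x11) = "start sending", XOFF (DC3, 0x13) = "stop sending".
# `link` is a stand-in object with a non-blocking read_byte() and a write() --
# in real life it would be a serial port or socket.
XON, XOFF = b"\x11", b"\x13"

def send_with_xon_xoff(link, payload, chunk_size=64):
    paused = False
    sent = 0
    while sent < len(payload):
        ctrl = link.read_byte()              # returns b"" if nothing pending
        if ctrl == XOFF:
            paused = True                    # receiver's buffer is filling up
        elif ctrl == XON:
            paused = False                   # receiver has drained, carry on
        if not paused:
            link.write(payload[sent:sent + chunk_size])
            sent += chunk_size
```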

1 Like

They have a 100MB RAM buffer that they load the entire job into with an HTTPS download. Nothing to stop them simply loading the second part when the first part has finished. No need for fancy flow control because HTTPS has done that for them. You can download it with a web browser if you capture the filename.

So all it would need is a way to communicate that it was an n-part job and simply repeat what it does now, without the cooldown and button press in between. They could make it seamless with a double-buffering scheme, but I don’t know if it is able to download while it is playing the puls file to the motors.
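
In pseudocode-ish Python, the double-buffering idea amounts to something like this, with `fetch_part` and `play_pulses` as hypothetical stand-ins for the real HTTPS download and motor playback:

```python
# Double-buffering sketch: download part n+1 while part n is playing.
# fetch_part() and play_pulses() are hypothetical placeholders -- this just
# shows the shape of the idea, not what the firmware actually does.
import threading

def run_multipart_job(job_url, num_parts, fetch_part, play_pulses):
    current = fetch_part(job_url, 0)                 # blocking download of part 0
    for n in range(num_parts):
        nxt = {}
        if n + 1 < num_parts:
            # Start pulling the next part while this one plays.
            t = threading.Thread(
                target=lambda part=n + 1: nxt.update(data=fetch_part(job_url, part)))
            t.start()
        play_pulses(current)                         # stream pulses to the motors
        if n + 1 < num_parts:
            t.join()                                 # brief pause only if the download is slower
            current = nxt["data"]
```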

1 Like

Thanks for the clarification.

I think I should clarify my point. Sending streams in coordinated chunks has been a solved problem for decades. I’m sure we can go back to telegraphs and see the same problems and similar solutions to them. (Although humans took the place of CPUs back then.) I really don’t think this is where the problem lies. But nobody on this side knows what the problem really is, or what the constraints are, or what the internal priorities are, or how badly the original code was designed or written. GF knows this is an issue and they are going to fix it. All we can do is wait for the fix.

(Opinion alert. My guess is that since they are renting cycles and space on cloud stuff, and they do not charge a monthly subscription to cover that cost, they are trying to pay the least amount possible for the resources. The problem is that they may have written code that does convoluted stuff to minimize cycle and space usage. The saying ‘Premature optimization is the root of all evil’ comes to mind. For example, maybe they don’t want to pay for storing long jobs for 3 hours on the server in order to easily spool. They’ve got multiple bugs here, they may be interconnected, and they may simply want to rewrite the entire pipeline code. I don’t know, of course; this is simply speculation.)

5 Likes

How could they have decided that 100MB was sufficient when it so clearly isn’t? Whether it’s a hardware limitation or a number pulled out of thin air, whoever is most responsible for that decision needs to get an extended menacing stare from Dan (possibly through a mirror).

Winner! Winner! Chicken Dinner!

Sitting idle after initial calibration:

95.7MB of free RAM @ idle*.

There is about 2.5GB of space on the 4GB FLASH that they could temporarily store some programming on.
But it would still need to pause and load the next part into RAM. I doubt they can read from the FLASH fast/consistently enough to drive the steppers in real time.

  • EDIT: This is with an active console session, so the usable RAM without a nosy user may be slightly higher…

8 Likes

Nice to have that view into the OS. Given that it’s Linux and has plenty of filesystem space, I have no idea why they’re not pulling the whole job down to a file, then spooling from there to the stepper motors. The only ‘interesting’ part is that they’d want to send data to the motors in sections such that at the end of the section the head is stopped, so that if there’s a delay while the next section is loading there’s no impact other than the delay. If a delay hit while the head was in motion, there could be skipping, etc., due to inertia from the head’s mass. That’d likely be easier to do on the server side, where they are doing motion planning, etc., rather than in the Glowforge, which is just playing back the low level stepper motor instructions.
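
For the spooling half of that, the shape of the code is only a few lines of standard library. `feed_steppers` is a hypothetical placeholder for whatever actually plays the data to the motors, and the path and block size are made up:

```python
# Sketch: pull the whole job down to local storage first, then spool from
# disk to the motion controller. feed_steppers() is a hypothetical callable
# supplied by whatever plays pulses to the motors.
import shutil
import urllib.request

def spool_job(job_url, feed_steppers, spool_path="/tmp/job.puls", block_size=64 * 1024):
    # 1. Download the entire job to local storage (integrity handled by TLS/TCP).
    with urllib.request.urlopen(job_url) as resp, open(spool_path, "wb") as f:
        shutil.copyfileobj(resp, f)
    # 2. Play it back from disk in blocks, ideally split so each block ends
    #    with the head stopped, so a slow read just adds a harmless pause.
    with open(spool_path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            feed_steppers(block)
```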

2 Likes

Thanks for being nosy!

If 100MB is 3 hours, that’d require a throughput of something like ~76Kbps, right? Even dialup could handle that. … Oh wait… not quite, but close.
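
Back-of-envelope, for anyone checking:

```python
# 100MB spread evenly over a 3-hour job:
bits = 100 * 1024 * 1024 * 8                 # use 10**6 per "MB" instead and you get ~74 kbps
seconds = 3 * 60 * 60
print(f"{bits / seconds / 1000:.1f} kbps")   # ~77.7 kbps -- just past what a 56k modem can do
```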

2 Likes

Yeah, modifying my designs, and then spending hours clicking in the GFUI, to work around this buffering limit, is getting old.

4 Likes

Would you believe that my first job as a product manager was managing the AT&T Teletype product line? LOL

3 Likes

Yes, 10kB/s of uncompressed waveforms. I think that is the most bloated way imaginable to represent a 2D motion plan.

In my own 3D printer tool chain, I calculate the motion plan in Python on my PC or an RPi and send it over Ethernet with UDP. So it could easily run in the cloud with a TCP stream.

I represent each line segment with a binary packet that has an end point in stepper motor units, the speed in hardware timer units, and the acceleration. My machine firmware then uses a tiny Bresenham loop to drive the motors, running on a 16-bit micro with no OS, just a tiny home-brew IP stack.
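
For a sense of scale, a segment packet along those lines fits in about 14 bytes. The field layout below is just an illustrative guess, not the actual format used here or by GF:

```python
# One line segment packed into a fixed 14-byte packet (illustrative layout only):
#   end point in stepper units, speed in timer ticks, laser power, acceleration.
import struct

SEGMENT = struct.Struct("<iiHHh")   # x_steps, y_steps, speed_ticks, laser_pwm, accel

def pack_segment(x_steps, y_steps, speed_ticks, laser_pwm, accel):
    return SEGMENT.pack(x_steps, y_steps, speed_ticks, laser_pwm, accel)

pkt = pack_segment(x_steps=4000, y_steps=-1250, speed_ticks=800, laser_pwm=0, accel=12)
print(len(pkt), "bytes per segment")   # 14 bytes, vs. a steady 10 kB/s of raw waveforms
```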

My binary representation is smaller than G-code and probably smaller than an SVG representation of the path.

I don’t see the point of sending waveforms at a fixed sample rate when you can represent a path in a much more compact way and very easily expand that definition into stepper waveforms. Obviously I would need more data for laser PWM values when engraving an image, but even that would be more compact than PWM represented as a bit stream.

So really I think GF have chosen a crazy software architecture, and that is causing a lot of the problems: converting a fairly compact SVG representation into the most bloated representation possible and then sending it to a reasonably powerful MCU that does no further processing at all. It could easily convert a line segment definition into waveforms itself. You can also split the motion plan at any point where it is momentarily stationary with the laser off, as the machine could pause there without ill effect.
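
The "split where it is stationary" part is also straightforward in principle. Here is a sketch over a list of segments with hypothetical end_speed / laser_on fields, invented just to show the idea:

```python
# Sketch: split a motion plan into independently sendable chunks at points
# where the head is momentarily stationary with the laser off.
# Each segment is a dict with hypothetical fields: end_speed (steps/s at the
# end of the segment) and laser_on.
def split_at_rest_points(segments):
    chunks, current = [], []
    for seg in segments:
        current.append(seg)
        if seg["end_speed"] == 0 and not seg["laser_on"]:
            chunks.append(current)      # safe pause point: nothing moving, laser off
            current = []
    if current:
        chunks.append(current)
    return chunks

plan = [
    {"end_speed": 300, "laser_on": True},
    {"end_speed": 0,   "laser_on": False},   # head stops, laser off -> split here
    {"end_speed": 450, "laser_on": True},
    {"end_speed": 0,   "laser_on": False},
]
print(len(split_at_rest_points(plan)), "chunks")   # 2
```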

3 Likes

Doing all the work on the server might be a brilliant idea - it means that users don’t have to install software, manage updates, etc., which is great for non-technical users, and GF could potentially do things on huge clusters of servers that are faster than a desktop. So the general idea might be good. But they really need to make the process much, much faster, and they need to be able to handle files over 100k - because right now, those limitations are pretty painful. If they start ‘processing’ 10x faster, and stream larger files, I’d be a very happy camper.

Of course, encoding the ‘waveforms’ more compactly would be a great start. It’s still not obvious to me that sending waveforms makes more sense than sending gcode, given that the GF is running a real OS and has plenty of storage and RAM (compared to most laser cutters or 3d printers, certainly).

2 Likes

Thanks for posting about this and for the feedback and suggestions. I’ve passed them on to the rest of the team.

@mpipes is correct. Thank you to everyone else who gave suggestions as well.