Specification reduction

:laughing::laughing::laughing:

2 Likes

We just respond as mind-numbed zombies here to our overlord :rofl:

…it’s Santa’s sleigh

Never thought anyone would’ve guessed on their first try! We managed to find it in a surface mount component to minimize size & weight but we’re paying a premium for that.

12 Likes

You can always tell when I’m doing forum replies on my phone. :slight_smile:

8 Likes

Well that explains why glowforge tries to discourage us from venting out the chimney.

17 Likes

Although he is less forthcoming today, Dan stated here (Will the head camera be used to help with XY registration?) that the head camera was used for autofocus. That is why I think it shines the laser spot at an angle to the camera, finds where the spot appears on the masking by averaging to sub-pixel resolution, and then triangulates to find the distance to the material.
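
To make that triangulation idea concrete, here's a minimal sketch under my own assumptions (baseline, tilt angle and focal length are made-up illustration values, not anything Glowforge has published): a dot laser mounted a known distance from the camera and tilted toward it lands at a pixel position that shifts with material height, so averaging the spot over a thin strip gives sub-pixel precision and a simple similar-triangles formula gives the distance.

```python
import math
import numpy as np

def spot_centroid(strip: np.ndarray) -> float:
    """Sub-pixel spot column: intensity-weighted mean over a thin image strip."""
    cols = np.arange(strip.shape[1], dtype=float)
    weights = strip.sum(axis=0).astype(float)
    return float((cols * weights).sum() / weights.sum())

def distance_mm(u_px: float, baseline_mm: float, focal_px: float, tilt_rad: float) -> float:
    """Triangulated distance for a laser mounted baseline_mm from the camera and
    tilted toward its axis by tilt_rad; u_px is the spot offset from the optical
    centre. From similar triangles: u = f*b/z - f*tan(tilt), so z = f*b/(u + f*tan(tilt))."""
    return focal_px * baseline_mm / (u_px + focal_px * math.tan(tilt_rad))

# Hypothetical numbers purely for illustration: 20 mm baseline, 1000 px focal
# length, 10 degree tilt, spot seen 50 px from centre.
print(distance_mm(50.0, 20.0, 1000.0, math.radians(10)))
```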

3 Likes

Hmm, I wonder how we’ll figure out that they’ve changed their mind on open-sourcing the firmware, since it isn’t in the list to begin with.

1 Like

You are arguing with yourself at this point. I said in my original post, and in every post since, that I don’t know. And I still don’t. I have never considered it a sign of weakness to admit I don’t know.

This started with a comment about doing real-time continuous focus using the cloud. It just surprised me that you even entertained the idea that it might be tried by competent engineers.

3 Likes

If you fall and no one sees it, did you really fall?

5 Likes

Yes, I don’t think it can be done in the cloud practically, so it would have to be done locally. If the controller has enough grunt to do real-time analysis of video, then the argument that the cloud allows them to use a cheap controller doesn’t really stand. It would easily be able to do all the other things the cloud does, because real-time continuous autofocus looks like the most demanding task by far.

Double-sided cutting is difficult if the walls of the material are not exactly vertical, because then you need to work out the alignment of the bottom face by looking at the bottom edges from the side; you can’t see them from above.

These two tasks look the hardest to implement and just happen to be the ones dropped from the specification to “clean it up”. Good to hear they are still planned to be implemented. Odd to remove them if they are fully confident they will succeed.

Regarding the head apertures: we know there must be a red laser, a camera, and a UV light source. If one port is the camera and the other port has two devices, my guess is a laser diode and a UV LED. If it is an integrated laser distance sensor with 0.1 mm accuracy, I will be amazed.

2 Likes

I believe the bruising says “yes.”

3 Likes

Actually, thinking about it some more: the laser spot will only move side to side when the height changes. That means you only need to look at a thin strip of video. Using the fact that it will generally be close to where it was in the last frame, you only need to look at a few pixels on most frames. So it actually wouldn’t need very much processing time to track the spot in real time and compute the depth. I don’t know why it has been kicked into the long grass.
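
As a rough illustration of how cheap that could be (purely a sketch under my own assumptions, not anything from the actual firmware): keep a thin strip of the image, search a small window around last frame’s spot position, and only fall back to scanning the whole strip when the spot jumps.

```python
import numpy as np

def track_spot(strip: np.ndarray, last_col: int, window: int = 8) -> int:
    """Brightest column in a thin image strip, searching only near last frame's
    position; rescan the whole strip if the spot has left the window."""
    lo = max(0, last_col - window)
    hi = min(strip.shape[1], last_col + window + 1)
    profile = strip[:, lo:hi].sum(axis=0)
    # Assumed "spot lost" test: the window's peak is no brighter than twice the
    # average column brightness, so fall back to scanning every column.
    if profile.max() <= 2 * strip.sum(axis=0).mean():
        profile = strip.sum(axis=0)
        lo = 0
    return lo + int(np.argmax(profile))
```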

1 Like

I think it makes sense to clean up the feature list for what is available now. That way, customers shopping for the laser now don’t wonder why it can’t do those things when they get their laser in November.

It also may not need to go to the cloud for processing, since it’ll need to make small adjustments in real time to the job that already exists in RAM. So possibly what little brains the unit has in its head could be used for that purpose.

Possibly; it depends on whether the head camera is linked to the main MCU or to one in the head. But yes, it certainly can’t go to the cloud and back in real time.

Depends on your cloud :wink: With 50-100 msec ping times it could be done with a really thin software stack.

That would be nowhere near fast enough if you consider how far the head moves in that amount of time.

1 Like

So at a top speed of about 80 mm/s, 100 ms of latency would be 8 mm of travel, and 16 ms would be about 1.3 mm. Good enough for warped material, but not for carved surfaces. (And yes, I’m ignoring the time for the mechanics to move the lens, because you’d be stuck with that anyway. And I’m ignoring the faster movement for engraving, because if the previous line isn’t good enough you’re already up the creek.)
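
For what it’s worth, that arithmetic in code form (the 80 mm/s head speed and the latencies are just the numbers assumed in this thread, not measured values):

```python
def lag_mm(speed_mm_s: float, latency_ms: float) -> float:
    """Distance the head travels during one control-loop round trip."""
    return speed_mm_s * latency_ms / 1000.0

print(lag_mm(80, 100))  # 8.0 mm behind with a 100 ms round trip
print(lag_mm(80, 16))   # ~1.3 mm behind with a 16 ms round trip
```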

Which raises another question: what are the offsets among the camera, the visible laser dot, and the IR aim point? That’s going to limit the resolution of anything you can do live. (Also, with live work, does this mean while the laser is firing? Because then the signal processing gets interesting at best, unless the camera can just switch to tracking where the burn point is.)

3 Likes

Good point. As the lens is between the red laser and the camera, I don’t think it will be able to see the red laser spot while it is burning stuff, because of the glare and smoke.

Perhaps it will have to do two passes over everything. In that case it could send video to the cloud, but it would slow the job down a lot.

I have a toaster with a live video feed. I would really like to get the same from my Laser 3D Printer.

1 Like