When? (Software readiness)

I know I might get some heat from the fans around here, but I think it is a reasonable question to ask…

When are we going to get precise camera alignment, and when are we getting a custom material library in the app?

Bonus: when will we get passthrough and flip-side support in the software?

I think these features are more important (at least for me) than the iOS app.


I don’t think you’ll get heat for it. Everyone wants them. But I also don’t think that you’re going to get an answer above or beyond anyone else who has asked the same question.


I’m with you about iOS. I wonder what the usage stats say about adoption rates? I doubt they’ll tell us but I’m curious about the types of users we have in Glowforge land.


FWIW, I doubt they have the computer vision folks pulled aside to write up the iOS app. Just because we see progress on one front, doesn’t mean progress isn’t being made on another front. That’s the great thing about personnel - we all have different skill sets. :slight_smile:


Fair, fair.


…buuut, if you assume that GF has fairly limited resources (they are a startup; well funded, yes, but a startup), then it’s arguable that the choice to spend resources on an iOS app means an opportunity cost in some other part of their development efforts. It’s possible that both arms of development operate relatively independently and are clipping along at full scheduled resource levels, but anything we might try to guess is exactly that… a guess.

So, my guess: they are not where they wanted to be UI-development-wise because they are still struggling to make good on promised features that turned out to be much harder than expected. Why else would some of the flashier things they promised (e.g. autofocus, passthrough, item recognition like MacBooks) not be ready, if they didn’t turn out to be a minefield to actually build? This has led to a pretty bad developer resource crunch, and thanks to the double-headed monster of limited financial resources (see above re: startup) and the mythical man-month, they are – to put it delicately yet succinctly – hosed.

And by proxy, to a smaller extent, so are we.

That being said, I love my Glowforge, and it does what I want it to do. I wish the UI were more in line with my sensibilities, but I’m a software developer and we are an annoyingly self-confident bunch when it comes to this stuff. “Everyone else’s code/UI is garbage” – every software developer ever.


I think that after waiting all this time we at least deserve an estimate, even if it slips. It has been a while since they posted an update.

I am not complaining! I am a happy customer; I only want an estimate, to know these features are coming sooner rather than later, and to know they haven’t hit a hardware wall where some of these features will be impossible to deliver.

Answering these questions for any of us will answer them for all of us. Not answering them leaves room for unnecessary speculation. If a feature is not coming, we’d rather know now.

If I am speculating more than I should, a simple answer from them will eliminate any room for speculation. Not answering will leave us speculating, which is great for Apple but not so great for a startup like GF.

A majority of the things people are asking for, including those above, are not UI development. I think there is one iOS person on the team, going by the bio descriptions. Stuff like using metric instead of inches, interface colors, page layout, settings, scaling, etc. – those are things a UI developer could easily do.


You’d think. But that’s still speculation to an extent. I’d imagine that any UI devs need to work closely with the backend devs. I don’t know anything about the architecture, but unless they are using really strict internal APIs, it makes sense that those two teams are joined at the hip.

I remember the post about how iOS dev was independent from other portions of their dev efforts. Was the relationship of UI to backend ever discussed?

I really don’t think that’s the issue here. I’m speculating too, of course. But IMO there are a couple-three factors at play in the slow pace of Glowforge software updates:

  • Hiring good software engineers is harder than ever
  • Co-premise: Hiring more engineers often does not fix the problem
  • People assume projects can be managed by anyone as an aside to their main job, or worse, assume they don’t need project management at all, when in reality it’s a specialized job and finding good project managers is also hard
  • The Glowforge is overdesigned and overengineered. This is a system where someone decided that machine vision was a better method of homing than a limit switch, and nobody stopped them. It doesn’t surprise me if there’s a lot of technical baggage that drags down any new development.

It’s easy for me to criticize from a distance, of course. If I woke up in charge over there, I’d try to rally around a few key principles from the Agile movement: focus on simplicity, deliver on a regular cadence, and work with the customer (us) to create value.


Can we at least get the ability to input a MANUAL CALIBRATION ADJUSTMENT? This would be incredibly straightforward to code. It could literally be done in hours (and I’m being generous).

My GF is consistently off by the exact same amount. It would save me so much time to just be able to enter a manual calibration offset.
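For what it’s worth, the feature being asked for here really is tiny in code terms. A minimal sketch, with an invented function name and a made-up (dx, dy) convention (this is not Glowforge’s actual API, just an illustration of a fixed-offset correction):

```python
def apply_fixed_offset(points, dx, dy):
    """Shift every camera-derived (x, y) placement by one constant
    user-entered calibration offset. Units here are assumed to be mm."""
    return [(x + dx, y + dy) for x, y in points]

# Hypothetical example: the camera places everything 1.5 mm right
# and 0.8 mm high of true, so we correct with (-1.5, +0.8)... er, (-1.5, -0.8)
# flipped as needed for your unit's error direction.
placements = [(10.0, 20.0), (50.0, 80.0)]
corrected = apply_fixed_offset(placements, dx=-1.5, dy=0.8)
```

A single stored (dx, dy) pair per machine is all this would take, which is why it feels like an hours-not-months request (assuming, as the next reply questions, that the error really is constant across the bed).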


Most units have a variable error across the material. So a fixed offset entry doesn’t work. Needs to be a mapping.
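To illustrate why a mapping is needed rather than one number: if the error differs at each corner of the bed, a correction has to be interpolated from measured samples. A sketch with invented corner measurements (bilinear interpolation over a 300×200 mm bed, values purely illustrative):

```python
def bilinear(x, y, x0, x1, y0, y1, q00, q01, q10, q11):
    """Bilinearly interpolate a value at (x, y) from four corner samples:
    q00=(x0,y0), q10=(x1,y0), q01=(x0,y1), q11=(x1,y1)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    bottom = q00 * (1 - tx) + q10 * tx  # interpolate along the y0 edge
    top = q01 * (1 - tx) + q11 * tx     # interpolate along the y1 edge
    return bottom * (1 - ty) + top * ty

# Hypothetical measured x-errors (mm) at the four bed corners.
# They differ corner to corner, so no single fixed offset can fix all of them;
# the correction at bed center falls between the corner values.
dx_err_center = bilinear(150, 100, 0, 300, 0, 200,
                         q00=0.5, q01=1.2, q10=-0.8, q11=0.3)
```

With a fixed offset, whichever corner you calibrate against leaves the other three wrong; the interpolated map gets all of them.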


I think there is also a homing switch installed somewhere, isn’t there?

Anyway… anyone from Glowforge care to comment on a time estimate?

No homing or limit switch. Calibration is completely determined by the head moving under the lid camera and a fine alignment of the logo on top of the head.


Nicely put.

Do you have data on “most units”? I’d love to see that.

I, of course, can only speak anecdotally (as I don’t have access to data on “most units”), but in my case the unit is most definitely off by a fixed amount consistently.

Your incidence of error is the same for, say, the top-left, bottom-left, top-right, and bottom-right quadrants? And the center is the same as all of those?

Wow. This is something… I would have added switches :blush:

Many tried. They chose to not listen.


Yours might be an unusual case.

Ignoring the probable tone… you have read fewer than 2K posts. Some of us have read 285K and been here since the first deliveries. There have been many hundreds of bed images showing offsets, and very few if any with a simple fixed offset across the bed. I’ve had three machines, all with variable offsets. If you don’t want information or a reasonable discussion, then that’s OK with me.


As an engineer, I’m pretty sure this is not the case. Related to the camera and warping issues, there does need to be a mapping transform, but for a consistent offset there is no transform that needs to be done.

That said, I’m curious how they calibrate the system in production. I’m guessing the wide-angle lens/camera system is not highly consistent across machines, which causes all of the variation we see.

I would lay down a known square grid with sufficient line spacing and image it through the camera; the data transform can then be easily calculated. Since the error might not be consistent at different focus points, this could be repeated at different heights, and then you can calculate and add in a height variable to correct for that distortion.

This is similar to how automotive HUDs are calibrated as the image moves up and down the windshield with varying degrees of distortion.
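The grid idea above can be sketched concretely: pair each imaged grid point with its known true position and fit a transform by least squares. This example fits only a simple affine transform with synthetic point pairs (a real calibration for a wide-angle lens would also model radial distortion, and nothing here reflects Glowforge’s actual pipeline):

```python
import numpy as np

def fit_affine(observed, true_pts):
    """Least-squares fit of an affine map [x, y, 1] -> [x', y'].
    Returns a 3x2 coefficient matrix."""
    A = np.hstack([observed, np.ones((len(observed), 1))])
    coeffs, *_ = np.linalg.lstsq(A, true_pts, rcond=None)
    return coeffs

# Synthetic data: four grid intersections as seen by the camera (slightly
# warped) versus their known true positions on the printed grid (mm).
obs = np.array([[0.0, 0.0], [10.0, 0.2], [0.1, 10.0], [10.2, 10.1]])
tru = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

M = fit_affine(obs, tru)
# Applying the fitted transform pulls the observed points back near truth.
corrected = np.hstack([obs, np.ones((len(obs), 1))]) @ M
```

Repeating the fit at several material heights, as suggested above, would give a per-height transform you could interpolate between.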
