FieldLink MR

Full-on Mixed Reality Stakeout

Now, with a heads-up display visor and hand gestures—no controller needed—mixed reality construction layout just went from “sci-fi” to the job site.

Stakeout – Struggling and Juggling

Sorry, but I need to geek out a bit. If you’ve ever done construction/survey staking, then you know how cumbersome and time-consuming it can be. While there have been many advances in layout, especially when robotic total stations (RTS) were introduced into staking workflows, the process can still be painful. Juggling the hand controller, prism pole, and bubble, staying oriented to the RTS, and reconciling the plans/design model with the points you need to stake is a bit of an art that takes a lot of practice.

But what if you could just attach a full-on augmented/mixed reality visor to your hard hat and see the design model superimposed on your view of the site? You’d select stakeout points in the same view, navigate to them, and take the shots, completely hands-free—and control all of this with some finger gestures. That very setup has finally been announced as a working product.

An MR Solution

On January 31, 2022, Trimble announced FieldLink MR, a new version of its field software for construction (which crews typically run on a handheld controller or tablet). It has now been adapted to run on the XR10 wearable (see xyHt’s feature from 2019: “A New Mix”) that attaches to standard hard hats. FieldLink is software that brings construction/BIM design models and data to field instruments such as robotic total stations and GNSS rovers for layout and related tasks. In that respect, standard FieldLink is similar to Trimble Access, the field software surveyors use to control their instruments for field data collection and layout.

Trimble was one of the first developers of practical applications for Microsoft’s HoloLens 2 mixed reality (MR) wearable, which launched in 2019. The partnership with Microsoft led to the XR10, and Trimble has been developing multiple applications for the system since. If you are not familiar with the HoloLens 2 or XR10: the wearable has a visor on the front with two transparent “head-up” display panels, one in front of each eye. You see the real world through the visor, and virtual features are projected onto the small displays, so both worlds merge into one field of view. A module holding the processors and graphics hardware sits in the part of the device that rests on the back of the hard hat. The device tracks your eye movements (iris recognition can also be used for security), so it can tell what part of the view you are focused on, such as which part of the design model. It also models the shape of your hand so it can better recognize the finger gestures you use to operate it.

Layout at the Snap of a Finger

Nathan Patton is the product manager for FieldLink MR. He has a degree in geomatics from the University of Calgary and surveyed in the field before joining Trimble a few years ago. His first posting within Trimble was with the mixed reality team, which led to his role in developing this solution. Patton was also selected as one of xyHt’s 22 Young Geospatial Professionals to Watch in 2022.

Nathan Patton, product manager for FieldLink MR, demonstrating hand gesture control.

“For standard stakeout, you first have to do a resection from control points to orient the design model. Then you select the point you want to stake in the controller, hold the rod, the total station takes a shot on the rod, and then you look on the controller to see how far left/right-fore/back you need to move to get the rod tip on the point,” says Patton. “We call this the ‘survey dance’; you keep shuffling around while the instrument continuously provides updates and directions to get you to the point.” While this is a great leap forward from the days of calculating radial stakeouts from the plan set (and pulling a tape from the setup point before that), the current process can still be time-consuming, sometimes as slow as five minutes per point. And it can take a long time to develop the “compass in your butt,” the heuristics that experienced stakers rely on to speed up the process.
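
To make the “survey dance” concrete, here is a minimal sketch of the geometry behind those left/right-fore/back directions (Python, with invented names; this is not FieldLink’s actual code). The remaining gap between the rod and the design point, in grid coordinates, is rotated into the frame of a user who is facing the instrument:

  import math

  def stakeout_deltas(design, measured, rts):
      # Remaining gap between the design point and the rod (grid frame)
      de = design[0] - measured[0]   # easting
      dn = design[1] - measured[1]   # northing
      # Azimuth from the rod toward the instrument, clockwise from north
      az = math.atan2(rts[0] - measured[0], rts[1] - measured[1])
      # Rotate the gap into the "facing the instrument" frame
      fore = de * math.sin(az) + dn * math.cos(az)    # + = move toward the RTS
      right = de * math.cos(az) - dn * math.sin(az)   # + = step to your right
      return fore, right

  # Example: the rod sits 2 cm shy of a point that lies toward the instrument
  print(stakeout_deltas((100.00, 50.02), (100.00, 50.00), (100.00, 150.00)))
  # -> roughly (0.02, 0.0): move 2 cm toward the RTS

Every fresh shot from the RTS re-runs this small computation, which is why the directions keep updating as the user shuffles around.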

“We wanted to avoid the mistake of trying to improve a process by completely changing the task,” says Patton. “We didn’t feel the need to invent a completely new layout application; instead, we made the existing, familiar one more efficient and easier to operate through mixed reality.” Anyone who has used FieldLink, Access, or other layout apps would find little different here beyond the interface.

He describes the new workflow with the XR10 and FieldLink MR: “The user can see the plan or design model in the XR10 view, overlaid on the site, and can do any of the normal functions by looking at features or menus, like looking up the control points, selecting/storing with simple ‘air tap’ gestures.”

What users would see from their perspective, wearing the MR visor, during stakeout operations; selecting a point to stake (left) and a status/update menu (right).

I had some time with a HoloLens 2 when it was first released. It tracked where my eyes were looking, and mastering the finger gestures took only a few minutes. The virtual elements, like the plan and menus, appear projected over the view of the real world via the two displays inside the visor, so there did not seem to be much of an issue with glare or external reflections getting in the way.

“The user will first jump in, and they’ll pick their files off a menu, pick their points, pick their models, and then they’ll jump into the actual mixed reality process,” says Patton. “And just like they would on the tablet, they connect to the total station, through radio or through Wi-Fi; all that stuff is the exact same. And then the user would see, just as they would today, a layout of all of their control points, as well as the model—in this case, in full 3D overlaid on the site.

“Just like they would on a controller or tablet, you pick control point one, and then navigate over to where that control point is on the construction site and take an observation with the total station; just doing a regular resection. And just like that, the system is oriented and the model and points are in 1:1 view all around.”
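
For readers who have not done one, a resection boils down to solving for the rigid motion (a rotation plus a translation) that best maps the instrument’s local observations of known control points onto their published grid coordinates. A least-squares sketch of that fit follows (Python, illustrative names; not FieldLink’s actual routine, which would also weight observations and report residuals):

  import math

  def orient_from_control(local_pts, grid_pts):
      n = len(local_pts)
      # Centroids of the observed (local) and known (grid) point sets
      lx = sum(p[0] for p in local_pts) / n
      ly = sum(p[1] for p in local_pts) / n
      gx = sum(p[0] for p in grid_pts) / n
      gy = sum(p[1] for p in grid_pts) / n
      # Accumulate cross terms that determine the best-fit rotation
      c = s = 0.0
      for (x, y), (X, Y) in zip(local_pts, grid_pts):
          c += (x - lx) * (X - gx) + (y - ly) * (Y - gy)
          s += (x - lx) * (Y - gy) - (y - ly) * (X - gx)
      theta = math.atan2(s, c)   # rotation, counterclockwise
      # Translation that carries the local centroid onto the grid centroid
      tx = gx - (lx * math.cos(theta) - ly * math.sin(theta))
      ty = gy - (lx * math.sin(theta) + ly * math.cos(theta))
      return theta, (tx, ty)

  # Two control points observed locally and known on the grid
  theta, t = orient_from_control([(0.0, 0.0), (10.0, 0.0)],
                                 [(500.0, 100.0), (500.0, 110.0)])

With two or more control points resolved this way, every model vertex can be placed in the view at true scale, which is the “1:1 view all around” Patton describes.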

Then the user can get selective about which points they are laying out. Say a user is doing HVAC hanger bolts today and pipe hangers another day: they can narrow the list, and each point shows a status indicating whether it has been laid out yet. The user first selects Layout Mode, then picks a point and navigates to it (guided by a large virtual light beam). Once close to the point, the interface automatically switches to fine-tune arrows for accurately positioning the prism pole. Patton notes that this differs from the old process: the left/right-fore/back directions on a controller assume the user is looking toward the RTS, while in the MR version the user sees correct arrows no matter which direction they are facing. Once on the point, the user taps a virtual button to store it as staked.
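
That last difference is a small but telling piece of math. Because the headset tracks its own heading, the same grid-frame correction can be re-expressed in whatever direction the wearer happens to face, rather than assuming they face the instrument. A sketch, reusing the rotation from the earlier snippet (again with invented names; azimuths in radians, clockwise from north):

  import math

  def arrows_for_wearer(de, dn, headset_heading):
      # Rotate the easting/northing gap into the wearer's own frame
      ahead = de * math.sin(headset_heading) + dn * math.cos(headset_heading)
      right = de * math.cos(headset_heading) - dn * math.sin(headset_heading)
      return ahead, right   # + ahead = step forward, + right = step right

  # The same 2 cm gap reads differently as the user turns in place:
  print(arrows_for_wearer(0.0, 0.02, 0.0))           # facing north: step ahead
  print(arrows_for_wearer(0.0, 0.02, math.pi / 2))   # facing east: step left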

Getting to Faster, Faster

How much more efficient could this be? Trimble isn’t ready to give an exact figure yet, but early testing is already showing significant time savings over traditional tablet-based layout, and the system appears to take less time to master. In one test with construction crews, a crew member who had never done any layout before was asked to stake points with MR, while Patton, who has a lot of layout experience, used a tablet and the old workflow. The two matched times in a race to lay out 20 points, with the FieldLink MR newcomer pulling ahead on a second set of 20.

But how will folks take to this new type of interface? So many of us are used to tactile keyboards and staring at a handy controller or tablet. On the other hand (no pun intended), we are seeing more virtual, augmented, and mixed reality in so many other aspects of daily life, especially among younger generations. And AR and MR are already changing construction and surveying in other ways. There are AR solutions like SiteVision, which design and construction teams use to see the design, oriented and scaled, superimposed over the camera view on a phone or tablet. Even in some staking workflows with camera-equipped RTS like the SX10, you see on the controller or tablet what the camera sees, with the model superimposed over it. That is augmented reality, just not immersive like the XR10.

We might get jaded by all of the hype about “metaverses” and some of the goofy early VR/AR goggles and headsets (remember how Google Glass faded faster than it arrived and was almost a “punch magnet” if worn in public?). I have not tried this specific layout workflow with such a wearable, but I did enough test exercises with the HoloLens 2, and it got me excited about what could be done with it for surveying and construction in the future (more below).

FieldLink MR should hit the streets in a few more months. It will be great to hear about user experiences.

MR for Surveying?

Trimble is not saying anything yet about the possibilities, or any timeline, for integrating this solution into surveying workflows; the focus is construction at this time, though there was some discussion of potential applications for tunneling. But there really could be some intriguing possibilities for surveying and other fields.

FieldLink shares many core elements with Access, which is used with surveying instruments, as do the respective construction and surveying instrument lines. More stakeout may be done by construction users, but many surveyors also stake on a fairly regular basis. I’m also thinking about how much MR tech could benefit topographic surveying, or any fieldwork for that matter, with total stations, GNSS, or even scanners. Imagine:

  • Seeing the progress of the points/linework you are collecting in the view, the same as a user would on a controller or tablet, but hands-free without lugging around another piece of hardware.
  • Hopefully soon, prism poles could evolve to have calibration-free tilt compensation, like GNSS rovers (e.g., the R12i), so you save even more time, untethered from the bubble (see the sketch after this list). This would likely require a different technology than the GNSS/IMU tilt compensation on rovers; perhaps a combination of IMU, laser, RTS camera(s), and the cameras in the XR10. And speaking of GNSS, live satellite positions could show up in the user’s view, as they can in some phone apps. It would be a nice add-on for the XR10.
  • Scanning could become more like “painting,” selecting features and regions to scan with hand gestures.
  • More input options, like voice. Often survey fieldwork involves text inputs, and while air tapping menus with field code lists can be great once you get used to it, mastering floating virtual keyboards can be frustrating.
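
On the tilt-compensation idea above: whatever sensor mix ends up providing the pole’s attitude, the correction itself is straightforward trigonometry. A hypothetical sketch (Python; the function, its inputs, and the notion of applying this to prism poles are assumptions, not a shipped feature): given the prism position the RTS measures, the pole length, and the sensed tilt from vertical plus the azimuth the pole leans toward, recover the tip on the ground.

  import math

  def tip_from_prism(prism, pole_len, tilt, lean_az):
      # prism: (easting, northing, elevation) of the prism center
      # tilt: angle off vertical; lean_az: azimuth the pole leans toward
      e, n, z = prism
      horiz = pole_len * math.sin(tilt)   # horizontal offset, tip to prism
      return (e - horiz * math.sin(lean_az),
              n - horiz * math.cos(lean_az),
              z - pole_len * math.cos(tilt))

  # A 2 m pole leaning 3 degrees due east: the tip sits ~10 cm west of the prism
  print(tip_from_prism((100.0, 50.0, 12.0), 2.0, math.radians(3), math.radians(90)))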

Could MR wearables eliminate the need for controllers and tablets? Likely never completely, but depending on how usable they prove and whether they save time and hassle, they might someday for many users. Of course, the speed at which MR and wearables like the XR10 get integrated into surveying solutions could be a function of just how many surveyors request it, and how loudly.