If you work in mining, you’ve probably seen the flagship case studies: vast greenfield operations running fleets of factory-integrated autonomous trucks, backed by multi-year, multi-million-dollar programmes and tight partnerships with a single OEM. Those projects are real and impressive. But for many operators with mixed fleets, older machines, contractor equipment and constrained capital, that version of autonomy feels distant from day-to-day reality.
A different path is emerging - one that doesn’t require deep access to CAN-bus networks, doesn’t demand a uniform fleet and doesn’t take months of integration before anything moves. Retrofit driving robots, combined with open autonomy software and interoperability standards like ISO 23725, are turning autonomy from a bespoke luxury into something far more accessible.
The limits of “CAN-bus first” autonomy
Traditional mining autonomy assumes you can talk directly to a vehicle’s internal nervous system. The autonomous system sends messages onto the CAN-bus or an equivalent network, commanding the truck to steer, brake, accelerate or dump. On a clean, standardised fleet with strong OEM support, that works well.
Most sites don’t look like that. They run fleets assembled over decades, with multiple brands and generations. Some machines have limited or no accessible CAN interfaces. Others are leased or owned by contractors, with little appetite for permanent modifications or deep OEM integrations. Each new model or variant can require its own engineering effort, testing and documentation.
This “CAN-bus or nothing” mindset quietly excludes a huge proportion of the world’s mining equipment from serious autonomy conversations. The result is familiar: autonomy is something to consider for the next flagship project or after a full fleet renewal, rather than a tool that can be applied to the assets already on site.
Driving robots: Autonomy that drives like a human
Driving robots offer a different approach. Instead of trying to integrate with each vehicle’s control network, they operate the machine the way a skilled human does - using actuators, sensors and software instead of arms and legs.
Robotic actuators handle every driving function: steering, throttle, brake, clutch, gear selectors and park brakes, as well as auxiliary controls for tipper operation and other non-powertrain functions. Sensors report steering angle, pedal position and vehicle dynamics back to the autonomy system.
From the machine’s perspective, not much has changed: someone is still driving it. The “someone” just happens to be a safety-certified robot, interfacing mechanically with a safety-certified underlying vehicle. That simple shift unlocks a lot. If a human can drive a particular truck or water cart, a robot can usually be installed to drive it too, regardless of OEM or model year. Legacy haul trucks without useful CAN access, contractor-owned support vehicles and mixed fleets of different brands all become candidates for automation. The robot becomes a universal interface between the physical machine and the autonomy software, smoothing out the messy diversity of real-world fleets - while retaining all of the vehicle’s existing on-board systems such as traction control and ABS braking.
From months to hours
One of the most tangible impacts of this approach is deployment speed. Deep CAN-bus integrations usually involve long lead times: negotiating interfaces, developing embedded code, running bench tests, validating on site and documenting every pathway. Weeks or months can pass before a meaningful autonomous haul takes place.
With retrofit robots, the sequence is simpler. Hardware is installed on the vehicle - actuators, brackets, a control cabinet and sensor mounts. The system is calibrated so that the robot understands how much to turn the wheel or press the pedal to achieve a given response. The autonomy stack connects via a relatively small set of commands that tell the robot what speed to target and which path to follow.
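As a very rough sketch of what that command surface can look like (the class, method and field names below are illustrative assumptions, not any vendor’s actual interface), the autonomy stack only needs to express kinematic demands; the robot’s calibration turns them into control movements:

```python
class DrivingRobot:
    """Hypothetical retrofit-robot command surface (illustrative only)."""

    def __init__(self, steer_ratio: float, throttle_gain: float, brake_gain: float):
        # Calibration: how much to move each control for a given vehicle response
        self.steer_ratio = steer_ratio      # steering-wheel degrees per road-wheel degree
        self.throttle_gain = throttle_gain  # pedal fraction per m/s^2 of acceleration demanded
        self.brake_gain = brake_gain        # pedal fraction per m/s^2 of deceleration demanded

    def command(self, road_wheel_angle_deg: float, accel_demand_mps2: float) -> dict:
        """Turn a kinematic demand into actuator setpoints."""
        steer_cmd = road_wheel_angle_deg * self.steer_ratio
        if accel_demand_mps2 >= 0.0:
            throttle = min(1.0, accel_demand_mps2 * self.throttle_gain)
            brake = 0.0
        else:
            throttle = 0.0
            brake = min(1.0, -accel_demand_mps2 * self.brake_gain)
        return {"steer_deg": steer_cmd, "throttle": throttle, "brake": brake}


# e.g. a gentle turn while easing off: 3 degrees of road-wheel angle,
# 0.4 m/s^2 of deceleration demanded
robot = DrivingRobot(steer_ratio=20.0, throttle_gain=0.25, brake_gain=0.3)
print(robot.command(road_wheel_angle_deg=3.0, accel_demand_mps2=-0.4))
```

In practice the calibration values would be measured during commissioning drives on each vehicle type, which is a large part of why familiar platforms can be brought up so quickly.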
Once that is done, the first autonomous missions can often be run in a controlled part of the site within hours of power-up on familiar platforms. Mines can see a truck running a basic route, a water cart following a fixed circuit or a shuttle between dump and stockpile happening autonomously long before a traditional integration project would typically reach the same point. That speed encourages experimentation. Operators can try autonomy on a single route, vehicle or task, gather data, learn what works and then decide where to expand.
A layered, vendor-agnostic autonomy stack
The robot is only one layer of the autonomy stack. Above the vehicle control layer sit the perception systems - LIDAR, radar, cameras, GNSS/IMU - that sense the world, and the autonomy software that plans paths, avoids obstacles and controls speed. Above that again are the fleet and business systems that assign tasks, track tonnes, manage queues and connect into planning and reporting tools.
For autonomy to be truly democratised, it isn’t enough to be OEM-agnostic at the hardware level. These layers need to talk to each other in consistent, open ways so mines are not locked into a single supplier at every level of the stack. It helps to think of three layers working together: a vehicle control layer where robots or OEM drive-by-wire interfaces tell the machine what to do; an autonomy layer that decides where to go and how to get there safely; and a fleet and business layer that manages dispatch, production and planning.
If each layer can speak a shared language, mines can mix and match components, upgrade individual layers over time and introduce new vendors without starting from scratch. Driving robots help by standardising the bottom layer: one consistent interface to a very inconsistent fleet.
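To make the layering concrete, here is a minimal sketch of the three interfaces, assuming hypothetical method names rather than any particular product’s API:

```python
from typing import List, Protocol


class VehicleControl(Protocol):
    """Bottom layer: a retrofit robot or an OEM drive-by-wire interface."""
    def set_targets(self, speed_mps: float, curvature: float) -> None: ...
    def emergency_stop(self) -> None: ...


class AutonomySystem(Protocol):
    """Middle layer: plans paths, avoids obstacles and controls speed."""
    def execute_task(self, task_id: str, route: List[dict]) -> None: ...
    def report_status(self) -> dict: ...


class FleetSystem(Protocol):
    """Top layer: dispatch, production tracking, planning and reporting."""
    def assign_task(self, vehicle_id: str, task: dict) -> None: ...
    def ingest_status(self, vehicle_id: str, status: dict) -> None: ...
```

The point is the separation: swapping the fleet layer or the autonomy layer should not force changes in the vehicle control layer, and vice versa.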
ISO 23725 and interoperability
This is where interoperability standards such as ISO 23725:2024 become critical. ISO 23725 focuses on the interface between autonomous systems - such as autonomous haulage - and fleet management systems in surface mining. It sets out how information about tasks, routes, positions, statuses and events should be exchanged.
In practical terms, it defines a common language that an autonomy system and a fleet management system can both speak. If both sides implement ISO 23725, an FMS from one vendor can dispatch and monitor an autonomous haulage solution from another without bespoke integration work each time.
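As an illustration of the kind of information exchanged (the field names below are invented for readability and are not the actual ISO 23725 schema), a dispatch instruction and a status update might carry data along these lines:

```python
# Illustrative only: these structures show the categories of information the
# standard formalises - tasks, routes, positions, statuses and events - not
# the message format defined by ISO 23725 itself.
dispatch_task = {
    "task_id": "HAUL-0042",
    "vehicle_id": "DT-117",
    "task_type": "haul",
    "route": [
        {"x": 4812.0, "y": 9250.5, "speed_limit_mps": 8.0},
        {"x": 5030.2, "y": 9411.8, "speed_limit_mps": 6.0},
    ],
    "destination": "DUMP-NORTH-03",
}

status_report = {
    "vehicle_id": "DT-117",
    "task_id": "HAUL-0042",
    "position": {"x": 4901.7, "y": 9302.1, "heading_deg": 37.5},
    "state": "EN_ROUTE",
    "events": [{"type": "OBSTACLE_DETECTED", "severity": "INFO"}],
}
```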
Driving robots typically sit inside that autonomous system layer. They are the part that turns “go to this point at this speed” into steering angles and pedal positions. When the autonomy layer exposes an ISO 23725-compliant interface to the fleet management system, you get the best of both worlds: a universal hardware interface to a heterogeneous fleet, and a standardised software interface to the mine’s dispatch and planning environment.
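On the robot side, the speed half of that translation can be as simple as a closed-loop tracker. The sketch below is a bare proportional controller with assumed gains, not a production control law:

```python
class SpeedController:
    """Minimal proportional speed tracker (illustrative, not a product)."""

    def __init__(self, kp: float = 0.08, max_pedal: float = 1.0):
        self.kp = kp                # pedal fraction per m/s of speed error
        self.max_pedal = max_pedal  # actuator travel limit

    def step(self, target_mps: float, measured_mps: float) -> dict:
        error = target_mps - measured_mps
        if error >= 0.0:
            return {"throttle": min(self.max_pedal, self.kp * error), "brake": 0.0}
        return {"throttle": 0.0, "brake": min(self.max_pedal, self.kp * -error)}


# e.g. holding 8 m/s while currently doing 6.5 m/s
print(SpeedController().step(target_mps=8.0, measured_mps=6.5))
```

Real deployments layer this on top of the vehicle’s own systems - retarders, ABS, traction control - which is why retaining them matters.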
This means mines can introduce retrofit robotics without replacing their entire FMS, and they can bring in new autonomy vendors or upgrade components over time without restarting integration from zero. Autonomy becomes more plug-and-play at the system level, not just at the vehicle level.
Safety, governance and people
Faster, more flexible deployment does not reduce the importance of safety or governance. If anything, it raises expectations on how carefully safety is engineered and managed. Retrofit robots must have clear fail-safe behaviour: emergency stop functions, predictable braking, defined handover procedures to manual control and robust operating envelopes agreed with operations and maintenance teams. They should integrate with existing site safety systems such as collision avoidance and proximity detection, and they must respect geofences and exclusion zones in the same way as any other autonomous platform.
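A minimal sketch of one slice of that fail-safe logic might look like the following; the names, timeout and rectangular geofence model are assumptions for illustration, not a description of any certified system:

```python
import time


class FailSafeSupervisor:
    """Illustrative watchdog running on the vehicle control layer."""

    def __init__(self, heartbeat_timeout_s: float = 0.5):
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Called whenever a valid command arrives from the autonomy layer."""
        self.last_heartbeat = time.monotonic()

    def inside_geofence(self, x: float, y: float, fence: tuple) -> bool:
        xmin, ymin, xmax, ymax = fence
        return xmin <= x <= xmax and ymin <= y <= ymax

    def check(self, x: float, y: float, fence: tuple) -> str:
        """Return the action the robot should take this control cycle."""
        if time.monotonic() - self.last_heartbeat > self.heartbeat_timeout_s:
            return "CONTROLLED_STOP"   # lost the link to the autonomy layer
        if not self.inside_geofence(x, y, fence):
            return "CONTROLLED_STOP"   # left the approved operating area
        return "CONTINUE"
```

The key design property is that the robot defaults to a controlled stop whenever it loses confidence in its inputs, rather than waiting for an explicit stop command.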
On the human side, autonomy changes roles rather than erasing them. In-cab drivers may transition into remote supervisors or controllers, overseeing several vehicles from a control room. Maintenance staff begin to work not only on engines and hydraulics but on actuators, sensors and computing hardware. Planners and dispatchers learn to schedule autonomous and manual fleets together, understanding where handoff points should be and how to manage mixed-traffic environments. Because retrofit robots can be rolled out gradually - one truck, one route, one use case at a time - sites can adjust their operating model and workforce skills step by step instead of attempting a big-bang transformation.
A more accessible autonomy future
When you combine retrofit driving robots, open or modular autonomy software and interoperability standards like ISO 23725, a different picture of mining autonomy emerges. A brownfield site might start by automating just one high-risk haul or a repetitive water cart route using the trucks it already owns. A mid-tier operator with a mixed fleet can run autonomy across different OEMs without negotiating separate integrations for each brand. A contractor can equip part of its fleet with robots and offer autonomy-ready services to multiple clients, plugging into whichever FMS or autonomy platform those sites prefer.
In this model, autonomy is no longer a single, monolithic decision tied to one vendor, one fleet or one mega-project. It becomes a flexible toolkit that mines can apply where it makes the most sense: to remove people from high-exposure tasks, to smooth out repetitive cycles, to improve consistency and to unlock new operating modes. Driving robots, installed and commissioned in hours rather than months, are a key enabler of that shift. By bypassing the CAN-bus bottleneck and working hand in hand with interoperability standards, they help turn autonomy from a rare, flagship capability into something that ordinary operations can realistically adopt.