The Role of Machine Learning in Autonomous Vehicles

Dec. 3, 2020
The prospect of a future where we don’t have to drive is very appealing for many. This collective eagerness to see autonomous vehicles on our streets presents an exciting opportunity that many automotive manufacturers are looking to exploit.

Those that succeed will be able to tap into a huge potential market: The semi- and fully autonomous vehicle market in North America alone was worth $1.7 billion in 2016 and is projected to grow to more than $26.2 billion by 2030.1

To ensure the safety and security of autonomous vehicles, countries will need to have appropriate infrastructure in place. And law-making authorities will have a duty to legislate and regulate the industry, both globally and locally.

But it’s the manufacturers and providers that have the most responsibility to ensure that self-driving cars and trucks can operate safely. It’s here that machine learning (ML) is feeding into the development of autonomous-vehicle technology.

Determining how to provide those safe, economical, and practical driverless vehicles is one of the most demanding technical challenges of our era. Machine learning is helping companies rise to that challenge. But what role will it play? And how will it shape global transport in the future?

Why Do We Need Autonomous Vehicles?

While it seems appealing to sit back and let the vehicle take charge of the driving, is this just pandering to innate human laziness and our need to fit ever more into our busy schedules? Or are there other reasons to champion the development of autonomous vehicles?

Globally, approximately 1.25 million road traffic deaths occur every year.2 And according to the U.S. Department of Transportation, “The major factor in 94 percent of all fatal crashes is human error.”3 Greater use of autonomous vehicles could therefore limit the mistakes that humans make and prevent millions of otherwise avoidable deaths.

For the commercial sector, autonomous vehicles have the added attraction of lowering costs. Driverless delivery means reduced labor costs for truck drivers, plus the additional efficiencies associated with staff being able to do something more productive while the vehicle does the driving.

How Machine Learning Can Be Used in Autonomous Vehicles

Although autonomous vehicles are still largely in the prototyping and testing stages, ML is already being applied to several aspects of the technology used in advanced driver-assistance systems (ADAS). And it looks set to play a part in future developments, too.

Detection and Classification of Objects

Machine learning is being deployed for the higher levels of driver assistance, such as the perception and understanding of the world around the vehicle. This chiefly involves the use of camera-based systems to detect and classify objects, but there are developments in LiDAR and radar as well.

One of the biggest issues for autonomous driving is the misclassification of objects. The data gathered by the vehicle’s different sensors is interpreted by the vehicle’s system. But with just a few pixels of difference in an image produced by a camera system, a vehicle might incorrectly perceive a stop sign as something more innocuous, like a speed-limit sign. If the system similarly mistook a pedestrian for a lamp post, then it would not anticipate that the pedestrian might move.

Through improved and more generalized training of the ML models, the systems can improve perception and identify objects with greater accuracy. Training the system with more varied inputs on the key parameters behind its decisions helps to better validate the data and ensure that what it’s being trained on is representative of the true real-world distribution. In this way, there isn’t a heavy dependence on a single parameter, or a key set of particulars, which might otherwise lead the system to a certain conclusion.

If a system is given data that’s 90% about red cars, then there’s a risk that it will come to identify all red objects as being red cars. This “overfitting” in one area can skew the data and therefore skew the output; thus, varied training is vital.
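To make that skew concrete, here’s a minimal Python sketch (the dataset and class names are hypothetical) that measures an imbalanced class distribution and naively oversamples the minority classes before training. A real pipeline would augment images with varied lighting, weather, and viewpoints rather than simply repeating samples.

```python
from collections import Counter
import random

def rebalance(samples, labels, seed=0):
    """Naively oversample minority classes so each class appears as
    often as the most common one -- a minimal sketch; real pipelines
    would augment the data, not just repeat it."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    out_samples, out_labels = [], []
    for label, items in by_class.items():
        # Draw (with replacement) until the class reaches the target count.
        for _ in range(target):
            out_samples.append(rng.choice(items))
            out_labels.append(label)
    return out_samples, out_labels

# Hypothetical skewed dataset: 90% "red_car", mirroring the example above.
labels = ["red_car"] * 90 + ["stop_sign"] * 6 + ["pedestrian"] * 4
samples = list(range(len(labels)))  # stand-ins for images
_, balanced = rebalance(samples, labels)
print(Counter(labels), "->", Counter(balanced))
```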

Driver Monitoring

Neural networks can recognize patterns, so they can be used within vehicles to monitor the driver. For example, facial recognition can be employed to identify the driver and verify that the person has certain rights, e.g., permission to start the car, which could help prevent unauthorized use and theft.

Taking this further, the system could utilize occupancy detection to help optimize the experience for others in the car. This might mean automatically adjusting the air conditioning to correspond to the number and location of the passengers.

In the short term, vehicles will need a degree of supervision and attention from someone designated as the “driver.” It’s here that recognition of facial expressions will be key to enhancing safety. Systems can be used to learn and detect signs of fatigue or insufficient attention, and warn the occupants, perhaps even going so far as to slow or stop the vehicle.
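As one illustration of how such fatigue detection can work, the sketch below uses the well-known eye-aspect-ratio (EAR) measure computed from facial landmarks. The landmark ordering, thresholds, and synthetic trace are assumptions; a production system would tune them per camera and fuse other cues such as head pose.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six 2D eye landmarks, following
    Soukupova & Cech (2016); the ratio collapses toward zero as the
    eye closes. Assumed landmark order: outer corner, two upper-lid
    points, inner corner, two lower-lid points."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical lid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

EAR_THRESHOLD = 0.21  # commonly cited value; would be tuned per setup
CLOSED_FRAMES = 48    # ~2 s at 24 fps before raising a fatigue warning

def monitor(ear_per_frame):
    """Count consecutive low-EAR frames; report once the eyes have
    stayed closed long enough to indicate fatigue."""
    closed = 0
    for i, ear in enumerate(ear_per_frame):
        closed = closed + 1 if ear < EAR_THRESHOLD else 0
        if closed >= CLOSED_FRAMES:
            return f"fatigue warning at frame {i}"
    return "driver attentive"

print(eye_aspect_ratio([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]]))
# Synthetic trace: eyes open (EAR ~0.3), then held closed (EAR ~0.1).
print(monitor([0.30] * 100 + [0.10] * 72))
```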

Driver Replacement

If we take full autonomy as the ultimate aim of autonomous vehicles, then automatic systems will need to replace drivers—supplanting all human input entirely.

Here, machine learning’s role would be to take data input from a raft of sensors, so that the ADAS could accurately and safely make sense of the world around the vehicle. The system could then fully control the vehicle’s speed and direction, as well as handle object detection, perception, tracking, and prediction.

However, security is key here. Running on autopilot will require extremely effective, guaranteed ways of monitoring whether the driver is paying attention and can intervene if there’s a problem.

Vision

Deep-learning frameworks like Caffe and Google’s TensorFlow use algorithms to train and run neural networks. They can be used with image processing to learn about objects and classify them, so that the vehicle can readily react to the environment around it. One example is lane detection, where the system determines the steering angles required to avoid objects or stay within a highway lane, thereby accurately predicting the path ahead.

Neural networks can also be used to classify objects. With ML, they can be taught the particular shapes of different objects. For example, they’re able to distinguish between cars, pedestrians, cyclists, lamp posts, and animals.

Imaging can also be used to estimate the proximity of an object, along with its speed and direction of travel. For maneuvering around obstacles, the autonomous vehicle could use ML to calculate the free space around a vehicle, for instance, and then safely navigate around it or change lanes to overtake it.
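Since the article names TensorFlow, here’s a minimal Keras sketch of the kind of image classifier involved. The input size, layer sizes, and the five object classes are purely illustrative; a production ADAS network would be far larger and rigorously validated.

```python
import tensorflow as tf

NUM_CLASSES = 5  # e.g., car, pedestrian, cyclist, lamp post, animal

# A deliberately tiny convolutional network for small RGB image patches.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10)  # hypothetical labeled data
```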

Sensor Fusion

Each sensor modality has its own strengths and weaknesses. For example, with the visual input from cameras, you get good texture and color recognition. But cameras are susceptible to conditions that might weaken the line of sight and visual acuity, much like the human eye. So, fog, rain, snow, and the lighting conditions or the variation of lighting can all diminish perception and, therefore, detection, segmentation, and prediction by the vehicle’s system.

Whereas cameras are passive, radar and LiDAR are both active sensors and are more accurate than cameras at measuring distance.

Machine learning can be used individually on the output from each of the sensor modalities to better classify objects, detect distance and movement, and predict the actions of other road users. Thus, it’s able to take camera output and draw conclusions about what the camera is seeing. With radar, signals and point clouds are being used to create better clustering, giving a more accurate 3D picture of objects. Similarly, with high-resolution LiDAR, ML can be applied to the LiDAR data to classify objects.

But fusing the sensor outputs is an even stronger option. Camera, radar, and LiDAR can combine to provide 360-degree sensing around a vehicle. By combining all of the outputs from the different sensors, we get a more complete picture of what’s going on outside the vehicle. And ML can be used here as an additional processing step on that fused output from all of these sensors.

For example, an initial classification might be made with camera images. Then, it could be fused with LiDAR output to ascertain distance and augment what the vehicle sees or validate what the camera is classifying. After fusing these two data outputs, varied ML algorithms can be run on the fused data. From this, the system can make additional conclusions or take further inferences that assist with detection, segmentation, tracking, and prediction.
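A heavily simplified late-fusion sketch, under the assumption that camera detections and LiDAR clusters each report a bearing: the camera supplies the class and confidence, the LiDAR supplies the range, and the two are associated by nearest bearing. Real systems use calibrated geometric projection, tracking, and probabilistic association rather than this greedy match.

```python
from dataclasses import dataclass

@dataclass
class CameraDetection:
    label: str          # e.g., "pedestrian"
    confidence: float   # classifier score in [0, 1]
    bearing_deg: float  # direction of the object from the camera

@dataclass
class LidarCluster:
    bearing_deg: float
    range_m: float      # distance measured from the LiDAR returns

def fuse(cameras, lidars, max_bearing_err=2.0):
    """Associate each camera detection with the nearest LiDAR cluster
    in bearing, attaching an accurate range to the classification."""
    fused = []
    for det in cameras:
        best = min(lidars,
                   key=lambda c: abs(c.bearing_deg - det.bearing_deg),
                   default=None)
        if best and abs(best.bearing_deg - det.bearing_deg) <= max_bearing_err:
            fused.append((det.label, det.confidence, best.range_m))
    return fused

print(fuse([CameraDetection("pedestrian", 0.92, 10.1)],
           [LidarCluster(9.8, 14.3)]))
# -> [('pedestrian', 0.92, 14.3)]
```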

Vehicle Powertrains

Vehicle powertrains typically generate a time series of data points. Machine learning can be applied to this data to improve motor control and battery management.

With ML, a vehicle isn’t limited to boundary conditions that are factory-set and permanently fixed. Instead, the system can adapt over time to the aging of the vehicle and respond to changes as they happen. ML allows boundary conditions to be adjusted as the vehicle system ages, as the powertrain changes, and as the vehicle is gradually broken in. With flexible boundary conditions, the vehicle can achieve more optimal operation.

The system can adjust over time, changing its operating parameters. Or, if the system has sufficient computing capacity, it could adapt in real time to the changing environment. The system can learn to detect anomalies and provide timely notification that maintenance is required, or give warnings of imminent motor-control failure.
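As a concrete, deliberately simple stand-in for such anomaly detection, the sketch below flags points in a time series that deviate strongly from a rolling baseline. The motor-temperature trace is synthetic; a deployed system would use learned models over far richer powertrain features.

```python
import numpy as np

def anomalies(series, window=50, threshold=4.0):
    """Flag points that deviate strongly (in rolling z-score terms)
    from the recent baseline -- a minimal anomaly-detection sketch."""
    series = np.asarray(series, dtype=float)
    flags = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mu, sigma = ref.mean(), ref.std() + 1e-9
        if abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Hypothetical motor-temperature trace with an injected fault spike.
temps = 70 + np.random.default_rng(0).normal(0, 0.5, 500)
temps[400] += 8.0
print(anomalies(temps))  # -> [400]
```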

Safety and Security in Autonomous Vehicles

Undoubtedly, the most important consideration with autonomous vehicles is that they operate safely and don’t cause road traffic accidents. This involves the functional safety of the vehicle’s system and its devices, as well as ensuring the inherent security of the network and systems that power it.

Functional Safety and Device Reliability

Machine learning has a part to play in ensuring that a vehicle remains in good operating order by avoiding system failures that might cause accidents.

ML can be applied to the data captured by on-board devices. Data on variables such as motor temperature, battery charge, oil pressure, and coolant levels is delivered to the system, where it’s analyzed to produce a picture of the motor’s performance and the overall health of the vehicle. Indicators showing a potential fault can then alert the system, and the owner, that the vehicle should be repaired or proactively maintained.
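One hedged illustration of this idea uses scikit-learn’s IsolationForest on synthetic telemetry (the variable ranges are invented): the model learns what “healthy” readings look like and flags departures for maintenance.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical telemetry: [motor temp (C), battery charge (%), oil pressure (bar)]
healthy = np.column_stack([rng.normal(75, 3, 1000),
                           rng.normal(80, 5, 1000),
                           rng.normal(4.0, 0.2, 1000)])

# Learn the envelope of normal operation from healthy data only.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# A reading with high temperature and low oil pressure scores as anomalous.
reading = np.array([[95.0, 78.0, 1.5]])
print(model.predict(reading))  # -> [-1] means "flag for maintenance"
```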

Similarly, ML can be applied to data derived from the devices in a vehicle, ensuring that their failure doesn’t cause an accident. Devices such as the sensor systems (cameras, LiDAR, and radar) need to be optimally maintained; otherwise, a safe journey can’t be assured.

Security

Adding computer systems and networking capabilities to vehicles brings automotive cybersecurity into sharper focus. ML can be used here, though, to enhance security. In particular, it can be employed to detect attacks and anomalies, and then overcome them.

One threat to an individual car is that a malicious attacker might access its system or use its data. ML models need to detect these sorts of attacks and anomalies so that the vehicle, its passengers, and the roads are kept safe.

Detecting Attacks and Anomalies

It’s possible that the autonomous classification system within a vehicle could be maliciously attacked. Such an offensive attack may deliberately make the vehicle misinterpret an object and classify it incorrectly, as in the case of a stop sign being perceived as a speed-limit sign. ML can be used to detect these kinds of adversarial attacks, and manufacturers are beginning to develop defensive approaches to circumvent them.

It’s by building robust systems around the ML model that such attacks can be defended against. Once again, training is important here. The aim is to create a more generalized way for the ADAS to make its decision. Training that avoids overfitting prevents a heavy dependence on one key particular, or a set of them. Because the system has a greater breadth of knowledge, an input that’s been maliciously manipulated won’t cause it to wrongly change the outcome or the perception.
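For concreteness, here’s what the classic fast gradient sign method (FGSM) looks like in TensorFlow. It crafts exactly the kind of small pixel-level perturbation discussed above; adversarial training, one common defense (not necessarily what any particular manufacturer ships), simply folds such perturbed images back into the training set.

```python
import tensorflow as tf

def fgsm_perturbation(model, images, labels, epsilon=0.01):
    """Fast Gradient Sign Method (FGSM): nudge each pixel slightly in
    the direction that most increases the classification loss, creating
    an adversarial version of the input batch."""
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        predictions = model(images)
        loss = tf.keras.losses.sparse_categorical_crossentropy(labels, predictions)
    gradient = tape.gradient(loss, images)
    return images + epsilon * tf.sign(gradient)

# Hedged sketch of adversarial training with hypothetical data and model:
# adv_images = fgsm_perturbation(model, sign_images, sign_labels)
# model.fit(adv_images, sign_labels, ...)
```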

Hacking, Data, and Privacy Concerns

Averting hacks on the connected networks that vehicles run on is paramount. In a best-case scenario, multiple hacked vehicles could come to a halt and cause gridlock. But at worst, an attack may result in serious collisions, injuries, and deaths.

More than 25 hacks have been published since 2015. In the largest incident to date, a software vulnerability caused Chrysler to recall 1.4 million vehicles in 2015. The vulnerability meant a hacker could assume control of the car, including the transmission, steering, and brakes.

There’s also a potential market for car-generated data. Data can be obtained on the occupants of a vehicle, their location, and movements. It’s estimated that car-generated data could become a $750 billion market by 2030.4 While this data is, of course, of interest to genuine parties, like the vendors and auto-parts manufacturers, such valuable data also attracts hackers.

Developing systems that better maintain the cybersecurity of cars is therefore vital. A modern car can contain as many as 150 electronic control units (ECUs), running around 200 million lines of software code. With such a complex system comes greater susceptibility and vulnerability to hacking.

With an estimated 470 million connected vehicles on the road by 2025 in Europe, the U.S., and China alone,5 the wireless interfaces they employ need to be secure to prevent scalable hacking attacks. Those supplying the computer systems that power autonomous vehicles must ensure that their systems are secure and uncompromisable.

Privacy

Privacy concerns abound with autonomous vehicles. There’s data associated with the driver and the family or other people that use the vehicle. With navigation, certain GPS information would allow the car to be tracked, or its journey history to be itemized. If an in-cabin camera is being used for driver monitoring, personal information will be collected about each occupant of the vehicle, including where they went, with whom, and when. Other data from outside the car might be collected, too. This could affect other road users who have no idea that they might be recognizable, or that data about them is being collected.

All of this raises understandable concerns about regulating data collection so that data is processed legally and correctly. Beyond that, there’s again a security risk that the data may be accidentally leaked, or even intercepted, meaning it could be accessed and used without those legal protections being applied.

Data is also valuable in a competitive way. As the vehicle is driving, data is continually being collected about what it’s seeing, the classification methodology used by the ADAS, its final conclusions, and so on. If this is accessed, it can be reverse-engineered to extract the information and copy it over to another environment.

There’s considerable effort in the industry toward securing the actual models derived from the machine learning: specifically, those that determine how the system classified what it perceived, how it worked out the speed at which an object was moving, and which direction it would move next.

Can Machine Learning Replace Traditional Vision Algorithms?

Machine learning can be employed as a replacement for traditional computer-vision algorithms, making it useful in autonomous vehicles for object detection, classification, segmentation, tracking, and prediction. Doing this will impact the system’s level of determinism, safety, and security.

In more deterministic methods, such as rule-based methods or traditional computer vision, the engineer or computer scientist developing the vision algorithm determines the key parameters required for making a decision. But in ML, the algorithm itself chooses the criteria that it deems matter most for making the right decision.

Thus, the quality of the training data is extremely important here. Validating how and why a decision is made can sometimes be difficult, and it’s not always clear what precisely led to an ML system’s decision.

With traditional computer vision, the key criteria are pre-identified. So, it’s known, for example, why a system has identified an object as a pedestrian. With ML, the system learns from data that simply says, “this is a pedestrian,” which makes the quality of the data set vitally important.

With ML, we can’t determine if the system really looked at the same criteria that were needed previously in a traditional algorithm, or whether it developed its own set of criteria. How then can the reasoning that was used be duplicated or repeated? There's typically an accuracy or confidence rating to the decision-making of ML algorithms. And with classification, for example, the system might be 90% certain that an object is a pedestrian and maybe 10% sure that it’s a lamp post.

Additional training might inch that certainty up to 92% or 93%, but it may never be possible to achieve 100%. When this is applied to safety-critical applications like autonomous driving, though, there’s no room for classification errors. The system needs 100% certainty that something is classified correctly, to ensure that what it deemed to be an inanimate object isn’t going to step in front of the vehicle.
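The 90/10 split above comes straight from a softmax over the network’s raw outputs. A tiny numpy example, with made-up logits:

```python
import numpy as np

def softmax(logits):
    """Convert raw network outputs (logits) into the confidence
    scores described above; subtracting the max keeps it stable."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# Hypothetical logits for two classes: pedestrian vs. lamp post.
probs = softmax(np.array([2.2, 0.0]))
print(probs)  # roughly [0.90, 0.10] -- confident, but never certain
```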

The Benefits of Using ML for Object Detection and Classification

While ML may not be inherently more accurate than traditional vision-based systems, over time its algorithms can achieve greater degrees of accuracy. Other systems eventually plateau at a certain level, as they can’t achieve any greater accuracy. But with ML, more and more rigorous training, along with gradual augmentation of and improvements to the model, makes it possible to achieve greater levels of accuracy.

Machine learning is also both more adaptable and scalable than vision systems. Because the ML system creates its own rules and evolves based on training, rather than engineer input, it can be scaled up and applied to other scenarios. Effectively, the system adapts to new locations or landscapes by applying its already-learned knowledge.

The ease with which ML platforms can identify trends is also a plus. They can quickly process large volumes of data and readily spot trends and patterns that might not be so apparent to a human looking over the same information. Algorithms used in autonomous vehicles need to apply this same sort of data review over and over. Thus, it’s an advantage to have a system that can do it quickly and with a high degree of effectiveness.

ML algorithms can adapt and evolve without human input. The system is able to identify and classify new objects and adapt the vehicle’s response to them, even dynamically, without any human intervention or correction. Again, broad and deep training is required so that the system directs the vehicle to respond appropriately, but this is a relatively simple process.

Using an ML approach avoids reliance on deterministic behavior. That’s to say, it’s impossible to always input the same values in the same way (not all cars are identical, yet they’re still cars), and an autonomous system needs to identify cars as cars despite their differences. It needs to produce reliably predictable results despite the inconsistency in the input. An autonomous vehicle has to work in the real world, where there are variances, uncertainty, and novelties.

The Limitations of Machine Learning for Object Detection and Classification

One of the drawbacks of using ML for object detection and classification is that huge data sets are required. And not only that, but as discussed before, systems need to be trained with a large variance of scenarios, so that there’s no bias in the data. Skewed data will not produce true-to-life outcomes, so a car could react to a situation completely differently, and perhaps dangerously, from how human intelligence would interpret it.

Getting enough training to avoid such data bias takes time and huge amounts of computing resources. And there’s also the time required to validate and check that the training is working and that the ADAS reacts as expected to the various scenarios it’s presented with.

One aspect of driving that ML may not be able to cope with is acknowledging other road users. Humans are accustomed to making eye contact to acknowledge a pedestrian beginning to use a crossing, or gesturing to another car to pull in front. It’s unclear how this could be replicated, and hence trained, in an autonomous system using ML. Until that puzzle is solved, and until systems come closer to the emotional intelligence and instantaneous prediction that humans use in these situations, such interactions may remain beyond autonomous vehicles.

Machine learning is also limited when it comes to teaching a system how to respond to something that humans have innately. For instance, the “sixth sense” that a car may be about to pull in front of you, or a truck may suddenly slam on the brakes.

Allowing a machine to be in charge of its own decisions is a difficult premise for many. The degree to which such a system is deterministic is something that many people, except perhaps the computer scientists, find problematic. Some people are suspicious that “the machines will take over” and their trust in self-driving vehicles is low. Despite the predictions that there would be fewer accidents and deaths with autonomous vehicles, trust is further damaged by incidents in testing, like the Uber self-driving car that ran over and killed a pedestrian in Arizona6 or the Tesla Autopilot driver who crashed while playing a video game.7

A fully autonomous system, at Level 5, would require perfect functional safety, which isn’t easy to guarantee using machine learning alone. With the amount and variance of training required, added to the difficulty in replicating human intelligence, systems aren’t yet sufficiently able to accurately detect and classify objects.

Is the Machine-Learning-Based Approach the Right One?

Despite some drawbacks, the benefits of using ML for object detection and classification are strong. It’s not imperative that the modeling and perception elements of a fully autonomous vehicle are achieved at the highest levels, like those set out in ASIL D (Automotive Safety Integrity Level D). At ASIL D levels, the system must be fully available almost all the time. This would typically be achieved with built-in redundancy as well as greater scrutiny and discipline in the development process itself.

Achieving ASIL D levels is difficult and costly. There was an initial expectation that everything in an autonomous car relating to the actuation would have to achieve this highest level of automotive quality and process control. But there are ways of achieving the availability of the system and safety without needing ASIL D requirements on every single component in the chain, especially when it comes to the modeling and perception elements.

For example, with multiple sensors like camera, LiDAR, and radar, there’s a degree of overlap in the fields of perception. This gives some backup and security. If, for example, the camera fails, then the LiDAR or radar would deliver enough of the same field of “view” so that between them, these sensor modalities could bring a certain level of redundancy. Therefore, in designing their modeling and perception systems, companies could stick to systems that don't require ASIL D devices, yet still deliver a good model of the world around the vehicle.

Future Trends in Machine Learning for Autonomous Vehicles

The major tech companies and main car manufacturers are all vying to develop their autonomous vehicle offerings. They each want to be the first to market in order to dominate the field. There’s a lot of activity at the moment with developments in the connected infrastructure, the emergence of 5G technology, moves toward the creation of new legislation to regulate the industry, and even a drive toward mobility as a service (MaaS).

There are also changes in how machine learning is being used. These are the future trends that we believe will drive the autonomous-vehicle market.

Imaging Radar

Imaging radar is a high-resolution radar that can both detect and classify objects. Apart from its basic radar capabilities, imaging radar also offers greater density in the reflected points that it collects. So, not only does it detect an object and determine its proximity, but it also uses the collection of all the points to start creating outlines of the objects that it’s picking up. From those outlines, it’s possible to begin to make decisions about the classification of the object that’s being reflected.
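As an illustration of turning dense reflections into object outlines, the sketch below clusters a synthetic 2D radar point cloud with DBSCAN from scikit-learn. Actual imaging-radar pipelines work on richer data (Doppler, elevation, radar cross-section) and more sophisticated models.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic radar reflections (x, y in meters): two dense objects plus clutter.
rng = np.random.default_rng(2)
car = rng.normal([20.0, 3.0], 0.3, (40, 2))
barrier = rng.normal([15.0, -4.0], 0.3, (25, 2))
clutter = rng.uniform(-5, 40, (10, 2))
points = np.vstack([car, barrier, clutter])

# Group dense reflections into object outlines; label -1 marks clutter.
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points)
for k in sorted(set(labels) - {-1}):
    cluster = points[labels == k]
    print(f"object {k}: {len(cluster)} points, "
          f"extent {np.ptp(cluster, axis=0).round(2)} m")
```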

Imaging radar has comparatively low development costs. And for a sensor that leverages all of the benefits of radar in detection and distance, as well as bringing classification capabilities, that’s an exciting trend for the future, perhaps even allowing radar to be relied on more than LiDAR.

Compute Performance

Training is the core aspect of machine learning. To get anywhere close to human capabilities and avert the risk of anomalies, training requires repeated exposure of the system to the varied and less-common situations that occur on urban roads, highways, and freeways.

As more and more road miles are gathered by car manufacturers, and more objects require detection and classification, the data sets being created ramp up.

The growth of these data sets presents a challenge: having sufficient compute performance on which to deploy those trained networks. Consequently, one innovation that’s emerging is the creation of highly optimized acceleration techniques. Developments in information processing have seen great progress, such as deploying trained networks directly onto integrated circuits. These new chips enable complex networks to be deployed at low cost and with low power. Cost-optimized and area-efficient silicon solutions like this will be able to drive the market forward and overcome the issues of computational performance.
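As one concrete example of such optimization (an illustration of a common technique, not NXP’s specific flow), post-training quantization in TensorFlow Lite shrinks a trained Keras network so it can run on low-cost, low-power silicon. The stand-in model below is hypothetical.

```python
import tensorflow as tf

# A stand-in trained network; any trained Keras model would do here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])

# Post-training quantization: store weights in 8 bits where possible,
# cutting model size for embedded automotive deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("classifier_quantized.tflite", "wb") as f:
    f.write(tflite_model)
print(len(tflite_model), "bytes after quantization")
```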

LiDAR

Another emerging trend in machine learning for the automotive market is transferring the techniques currently deployed for camera-based classification and detection over to LiDAR networks. So, rather than using a two-dimensional picture frame to detect and classify objects, it’s possible to use three-dimensional data derived from LiDAR reflections and then run trained networks on that information. In this way, the system can determine aspects such as where the road begins and ends, the location of cross junctions, or the location of traffic lights.

This has been possible using convolutional neural networks (CNNs), a class of deep learning. Companies are already working on this technology, with success, and it’s an area that’s showing great promise.
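A minimal sketch of the data-preparation step this implies: quantizing a raw LiDAR point cloud into a 3D occupancy grid that a convolutional network can consume. The grid size and extents here are arbitrary assumptions.

```python
import numpy as np

def voxelize(points, grid=(64, 64, 16), extent=((-32, 32), (-32, 32), (-2, 6))):
    """Quantize a LiDAR point cloud (N x 3, meters) into a 3D occupancy
    grid suitable as CNN input -- a minimal voxelization sketch."""
    vox = np.zeros(grid, dtype=np.float32)
    # Drop points outside the grid extent on each axis.
    for axis in range(3):
        lo, hi = extent[axis]
        points = points[(points[:, axis] >= lo) & (points[:, axis] < hi)]
    # Map remaining points to integer voxel indices and mark them occupied.
    idx = np.empty((len(points), 3), dtype=int)
    for axis in range(3):
        lo, hi = extent[axis]
        idx[:, axis] = ((points[:, axis] - lo) / (hi - lo) * grid[axis]).astype(int)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox

cloud = np.random.default_rng(3).uniform(-10, 10, (1000, 3))
print(voxelize(cloud).sum(), "occupied voxels")
```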

Fully Integrated Microcontroller Units

Fully integrated microcontroller units (MCUs) will enable the next generation of autonomous vehicles. At Level 5, MCUs will allow the vehicle to detect a fault, then automatically bring the car safely to a stop without any intervention from the driver.

Today’s high-performance processors are derived from graphics or from desktop and enterprise computing; they’re not solutions specifically designed for automotive applications. Automotive MCUs, meanwhile, aren’t powerful enough on their own. As a result, the two are used in conjunction: The MCU sits alongside the high-performance processor and communicates with the vehicle, giving the system the direct interface needed for safe communication with the vehicle.

The MCU can run safety self-checks on the high-performance integrated circuit, with diagnostics that make sure the system-on-chip (SoC) is healthy and performing optimally. The MCU acts as a safety core, compensating for the inherent deficiencies of high-performance processors, namely that they’re not designed for the automotive environment.
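The supervision pattern can be sketched abstractly. In the Python below, `read_soc_heartbeat`, `run_diagnostic`, and `degrade_to_safe_stop` are hypothetical stand-ins for platform interfaces; a real implementation lives in safety-certified firmware, not Python.

```python
import time

TIMEOUT_S = 0.1  # hypothetical budget: SoC must respond within 100 ms

def safety_monitor(read_soc_heartbeat, run_diagnostic, degrade_to_safe_stop):
    """Sketch of the MCU-side supervision loop described above. All three
    callables are hypothetical: a heartbeat read, a built-in self-test,
    and a minimal-risk maneuver (e.g., a controlled stop)."""
    last_beat = time.monotonic()
    while True:
        if read_soc_heartbeat():
            last_beat = time.monotonic()
        if time.monotonic() - last_beat > TIMEOUT_S or not run_diagnostic():
            # SoC unresponsive or failing self-checks: bring the vehicle
            # to a controlled stop rather than continue driving blind.
            degrade_to_safe_stop()
            return
        time.sleep(0.01)
```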

A trend here is integrating MCU capability into the processor to deliver a single-chip solution. Thus, the MCU functionality will be embedded directly into chips by manufacturers. In this way, the products are specially designed to meet the higher processing needs of self-driving vehicles. This negates the need for a separate chip, saving cost while also improving the quality and capability of the processor.

The Future of Machine Learning and Autonomous Vehicles

So, how will machine learning shape the autonomous-vehicle industry in the future? And when will we see fully autonomous vehicles on the roads?

It’s unlikely that we can expect full-scale production models of autonomous vehicles before 2025, and Level 5 cars before 2035.8 Beyond that, it remains to be seen how long it might take before the number of driverless cars outstrips those driven manually.

Nevertheless, driverless cars and trucks are certainly on the horizon. Thanks to ML, these vehicles are set to bring greater mobility to millions of vision-impaired and disabled people; enable deliveries in more remote areas, getting goods to people more quickly and cost-effectively and connecting communities; and more than anything, improve road safety, reducing road traffic incidents, injuries, and deaths.

But to transform our lives for the good, some factors still need to come together. Car manufacturers will have to do their part to ensure the safety, reliability, and viability of these vehicles. They, of course, want a return on their investment in research and development, but they will need to prove the safety and security of driverless vehicles before consumers will readily accept them.

Governments have a part to play, too. They will need to legislate on the autonomy of vehicles and the absence of a driver. It’s a certainty that different countries will take different approaches to this matter. And even within countries, different legislatures—for example, within the U.S.—might see things differently. Cooperation and collaboration here will go a long way toward helping the industry to provide standardized vehicles with similar or identical features.

Governments could also help to encourage the take-up of autonomous vehicles with incentives. In the same way that many legislatures have encouraged the use of electric cars, or those that cause less harm to the environment, tax incentives could promote the use of autonomous vehicles. This would be beneficial to the countries or states that do this because the payoff would be fewer accidents and less pressure on healthcare. Equally, we could see insurance companies offering lower premiums for driverless vehicles, perhaps at a reducing scale according to the autonomy level.

Finally, collaboration is key. Different companies each have expertise in their own areas of machine learning. Some specialize in camera-based perception or LiDAR processing; others have invested in fusing sensor inputs; and still others have expertise in pathfinding and trajectory-planning decisions and the translation of those into an enjoyable actuation of steering, acceleration, and deceleration. These companies will need to collaborate to help car manufacturers put their autonomous-vehicle designs into production.

Ali Osman Ors is Director, Automotive AI Strategy and Strategic Partnerships, at NXP Semiconductors.

References:

1. TechSci Research, North America Semi & Fully Autonomous Vehicle Market - Competition Forecast & Opportunities, 2016-2030.

2. World Health Organization, Number of road traffic deaths 2013.

3. U.S. Department of Transportation, Automated Driving Systems 2.0: A Vision for Safety.

4. McKinsey & Company, Monetizing Car Data, September 2016.

5. Strategy& (PwC’s strategic consulting team), Digital Auto Report 2017.

6. Wikipedia, “Death of Elaine Herzberg”: https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

7. BBC News, Feb. 26, 2020: https://www.bbc.co.uk/news/technology-51645566

8. The Society of Motor Manufacturers and Traders, Connected and Autonomous Vehicles: 2019 Report / Winning the Global Race to Market.
