
In the previous blog, we looked at Connected vehicle services and telemetry, digging into the current data volumes. In this blog, we're going to take a look at the Connected vehicle of tomorrow. How will vehicle services change, and what data volumes might they represent?

What will the data consist of tomorrow?

As vehicle manufacturers increase the number of models within their fleets that are a) connected and b) equipped with data-gathering capabilities across a wider range of sensor types, the data volumes those manufacturers collect should be expected to increase. In addition, new features and functionality will be added that make use of network connectivity.

One of the major focus areas is safety: vehicle-to-vehicle, vehicle-to-infrastructure and vehicle-to-vulnerable road users, such as cyclists and pedestrians. There are many safety use-cases detailed by bodies such as the US NHTSA. The onboard sensor sets in the vehicle will certainly contribute to the vehicle's own ability to detect and (potentially) avoid risk scenarios. The current technologies to support communication-based safety applications, such as 'forward-collision warning' and 'blind intersection warning', are centered on the use of Dedicated Short-Range Communications (DSRC) and the emerging Cellular Vehicle-to-Everything (C-V2X). In such use-cases, latency is a vital component. In the case of DSRC, dedicated hardware is used for data communications to and from the vehicle. It is likely that C-V2X will also require a dedicated modem rather than sharing bandwidth with a modem that is supporting infotainment services or vehicle telemetry. From the perspective of the data volume seen by the vehicle manufacturer, the safety data is exchanged over a different connectivity interface; however, it is very likely that indications of safety events will be recorded in the general vehicle telemetry.

A number of manufacturers have demonstrated or announced the integration of voice-control agents such as Amazon Alexa and Google Assistant. Closer inspection suggests that these integrations will require network connectivity to work and that they provide a voice front-end to existing voice-assistant functionality, including smart-home integrations. A number of vehicle manufacturers offer voice-assistant integrations that enable capabilities such as checking the fuel level or enabling the heating or air-conditioning through existing APIs. In both the Alexa and Assistant cases, the waveform of the spoken command is transmitted for processing, but this is a relatively small amount of data. And of course, it won't work if there is no network coverage! It is certainly reasonable to suggest that forms of voice control that do not require network connectivity could be integrated within the vehicle, offering the ability to adjust in-vehicle settings such as cabin temperature or seat position by voice. Manufacturers such as Byton have announced plans to offer gesture-control capabilities. Once again, it is likely that vehicle manufacturers will see value in understanding how customers use these features, so they may well want to obtain the associated telemetry.

High-definition mapping is often mentioned with respect to next-generation vehicles. It is important to understand that while HD Maps provide fine-grained detail and high precision, they are not intended for use by humans; rather, they are intended for consumption by automated systems (machines). A complex mapping process is required in order to generate the various layers that comprise the map. As discussed previously, navigation solutions may be integrated into the vehicle or may run on a smartphone, with HD Map information obtained via a cellular connection and downloaded to the navigation system while the vehicle is in motion. The volume of data for HD Maps is higher than that of today's mapping solutions, such as Google Maps, with an estimate of 10MB per mile of roadway at a resolution of 5cm. 'Human readable' or 'Standard Definition' maps can be generated from HD Map information, requiring roughly a quarter of the data volume. While regular vehicles will be able to contribute updates to particular layers of the map, such as reporting temporal information like traffic congestion, they will not be involved in the creation and maintenance of the base-layer topology, since this requires specialist equipment.
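
As a rough illustration (a back-of-envelope sketch, not figures from any map provider), the per-mile estimate above can be turned into an approximate download volume for a trip. The 30-mile commute below is purely an assumption.

```python
# Back-of-envelope sketch of HD map download volume, using the ~10MB per mile
# estimate quoted above and the ~1/4 ratio for 'Standard Definition' maps.
# The 30-mile commute is an illustrative assumption.

HD_MAP_MB_PER_MILE = 10   # quoted estimate at ~5cm resolution
SD_MAP_RATIO = 0.25       # SD maps said to need roughly a quarter of the HD volume

def map_download_mb(route_miles, hd=True):
    """Approximate data volume needed to cover a route of the given length."""
    per_mile = HD_MAP_MB_PER_MILE if hd else HD_MAP_MB_PER_MILE * SD_MAP_RATIO
    return route_miles * per_mile

commute_miles = 30  # assumption: a 30-mile daily commute
print(f"HD map, {commute_miles} miles: ~{map_download_mb(commute_miles):.0f}MB")         # ~300MB
print(f"SD map, {commute_miles} miles: ~{map_download_mb(commute_miles, False):.0f}MB")  # ~75MB
```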

Another active area of development is that of Advanced Driver Assistance Systems (ADAS). Many manufacturers offer capabilities such as Adaptive Cruise Control, using sensor inputs to adjust the vehicle's speed based on proximity to the vehicle in front. Others offer functions such as Traffic Jam assistance where, below certain speeds, the vehicle will accelerate and brake according to the behavior of the vehicle in front, as well as staying in its traffic lane. TomTom is exploring the intersection of HD Map information and ADAS, looking at how information such as high-definition lane-closure information (identifying which lane is closed) or obstruction information could be incorporated into driver-support solutions. Such solutions will require network connectivity in order to receive the necessary temporal data.

Tesla's Autopilot and Audi's Traffic Jam Pilot both offer a series of driving capabilities under certain circumstances. More advanced capabilities demonstrated by projects such as Toyota's Guardian program show increasing levels of sophistication and ability. None of these solutions requires network connectivity to operate, and today each is only able to operate within a limited set of conditions.

It is worth noting the use of inward-facing cameras as part of the Guardian solution to determine the driver’s state of awareness. Such capabilities, known as Driver Monitoring Systems (DMS), will become a significant component of ADAS solutions, especially as more advanced offerings will, rightly or wrongly, offer more opportunities for the driver to focus on things other than what the vehicle is doing. Euro NCAP, the European New Car Assessment Program, has stated in its 2017 report that it will start testing DMS functions in 2020 with a requirement that by 2024, a vehicle must have such a system in order to obtain a ‘5-star’ rating.

The report produced by the Automotive Edge Computing Consortium (AECC) details the 'Intelligent Driving' scenario (aka ADAS) and outlines data collection from the vehicle and the driver, "including biometric sensor data and control data gathered from various sources including movement logs from in-vehicle sensors and on-board biometric sensors/cameras". It notes data processing within a cloud-based data center, resulting in parameter updates being sent to the vehicle, with a generated data volume of 7.5GB per duty-cycle, collected on a periodic basis. Even though it is likely that the range of circumstances under which ADAS capabilities can be used will increase over time, it is also feasible that for many vehicle users, their daily commute may not involve engaging these ADAS solutions. However, it is foreseeable that driver-monitoring solutions could become a standard function, enabled whenever the vehicle is in motion.
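
To put that figure in context, here is a minimal sketch of what 7.5GB per duty-cycle could imply per vehicle per month, assuming (purely for illustration, these are not values from the AECC report) that a duty-cycle corresponds to a single drive and that a commuter makes two drives per working day.

```python
# Illustrative only: the drives-per-day and working-days figures are assumptions,
# not values taken from the AECC report.

GB_PER_DUTY_CYCLE = 7.5       # figure quoted from the AECC report
DRIVES_PER_DAY = 2            # assumption: commute out and back
WORKING_DAYS_PER_MONTH = 22   # assumption

monthly_gb = GB_PER_DUTY_CYCLE * DRIVES_PER_DAY * WORKING_DAYS_PER_MONTH
print(f"~{monthly_gb:.0f}GB per vehicle per month")  # ~330GB
```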

As the Intel slide presented at the beginning of this blog series identified, the various sensors that manufacturers install within the vehicle will generate a considerable volume of information. The exact number of sensors of each type can lead to a wide variation in the amount of data generated.

One must bear in mind that the raw data being generated is primarily processed within the vehicle, since that is where the decision-making functionality needs to execute; only a subset will be transmitted back to the vehicle manufacturer.
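
As a loose illustration of that principle (an invented example, not any manufacturer's actual pipeline), a high-rate sensor channel can be reduced in-vehicle to a compact summary before anything is transmitted.

```python
# Minimal sketch: collapse a high-rate sensor channel into a few summary
# statistics before transmission. The channel name and values are invented.

from statistics import mean

def summarise_channel(samples):
    """Reduce raw samples to a compact summary suitable for upload."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": round(mean(samples), 2),
    }

raw_wheel_speed_kph = [63.2, 63.5, 64.1, 65.0, 64.7, 63.9]  # invented values
print(summarise_channel(raw_wheel_speed_kph))
# thousands of raw samples per trip shrink to a handful of bytes
```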

Applications will be key to determining the data volume to and from the vehicle, with vehicle telemetry and sensor data being just one such application. Actual numbers from vehicle manufacturers for future vehicles are hard to obtain, with various papers suggesting that next-generation vehicles will send 25GB per hour. Information provided by OEM A (a Japanese auto-maker) indicates that it intends to move towards a 'predictive health maintenance' suite of applications and, in so doing, expects the data volume to increase from the current 10-15MB per duty-cycle to 50MB in 2020, rising towards 300-350MB per duty-cycle in 2022. Its '2nd-generation' Connected cars are forecast to have a data collection volume in the 35-65GB per duty-cycle range. It is not known at this time how much of this data will be collected while the vehicle is in motion versus while it is stationary.
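
To give a sense of scale, the sketch below annualizes the per-duty-cycle figures quoted above. The 500 duty-cycles per year is an assumption for illustration (roughly one to two drives per day), not a figure from OEM A.

```python
# Rough annualized view of the per-duty-cycle figures quoted above.
# The duty-cycles-per-year value is an assumption, not an OEM figure.

DUTY_CYCLES_PER_YEAR = 500  # assumption: roughly 1-2 drives per day

scenarios_mb = {
    "current telemetry (10-15MB)": 12.5,
    "2020 predictive maintenance (50MB)": 50,
    "2022 predictive maintenance (300-350MB)": 325,
    "2nd-generation connected car (35-65GB)": 50_000,
}

for name, mb_per_cycle in scenarios_mb.items():
    annual_gb = mb_per_cycle * DUTY_CYCLES_PER_YEAR / 1000
    print(f"{name}: ~{annual_gb:,.0f}GB per vehicle per year")
```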

In the final blog post, we're going to take a look at the cost of data and how the financial model will affect manufacturers' thinking. It will also consider the impact that connected cars will have on cellular networks in the future.