Fig. 1 shows the pattern of the current automotive chip market. NVIDIA is leading the market with its Drive series GPU platforms, and has already built cooperation with car manufacturers such as Audi, Tesla and Daimler. Intel is also focusing on this area. It has acquired many relevant companies, such as Mobileye, Nervana, Movidius and Altera, and has collaborated with BMW and Delphi to build its own ecosystem. It has also released products such as the Atom A3900 for automotive applications [11]. Another chip giant, Qualcomm, is also trying to make inroads into this market. It has released dedicated processors such as the Snapdragon 602A and 820A chips [12], and it has bought NXP to strengthen its impact in the ADAS market.
Many ADAS solutions have adopted graphics processing unit (GPU) based systems to carry their autonomous vision algorithms, not only because of their powerful computational ability, since GPU-based systems offer massive parallelism in their datapaths and the latest GPU processors, such as the NVIDIA Drive PX2 system [13] with Xavier chips, can deliver a throughput of several TOPS, but also because of the robust and efficient development environment support provided by CUDA [14] and cuDNN [15].

While GPUs can offer computing speeds of several TOPS, power consumption is often the bottleneck for in-car system implementation, as some modern GPUs consume 200-300 W. One solution is to improve power efficiency, which can be achieved through dedicated logic customization, and reconfigurable processors are a suitable choice for this. One representative reconfigurable processor is the FPGA. FPGA suppliers Xilinx and Altera have already introduced FPGA products into ADAS scenarios, such as the Zynq-7000 [16] and Cyclone-V [17] series SoCs. While drawing only around 10 W, an FPGA can reach a peak performance of around 100 GOPS. Together with its multi-threading, parallel processing and low-latency features, the FPGA can be expected to be a favorable choice for autonomous vision systems.

Naturally, an FPGA design can be converted into an application-specific integrated circuit (ASIC), and the resulting circuit can further improve efficiency by at least one order of magnitude while maintaining its reconfigurability, which makes the ASIC another mainstream ADAS solution. Suppliers including Qualcomm, Intel, Infineon and Texas Instruments have released ASIC-based SoC products for ADAS. One representative product is Intel Mobileye's EyeQ4 chip [18], which will be released in 2018 and can deliver a 2.5 TOPS performance while drawing only 3-5 W. This low-power feature makes it quite suitable for in-car supercomputing.
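Performance per watt, rather than raw throughput, is what the paragraphs above are really comparing. The following is a minimal sketch of that comparison using only the figures quoted in this section; the GPU throughput and the exact EyeQ4 power draw are assumed values within the stated ranges, not benchmarks.

```python
# Back-of-envelope GOPS/W comparison using the figures quoted in this section.
# GPU throughput and the EyeQ4 power draw are assumed midpoints, not measurements.

platforms = {
    # name                  (peak GOPS, power in watts)
    "Modern GPU":           (8_000, 250),  # "several TOPS" at 200-300 W (assumed)
    "FPGA SoC":             (100,    10),  # ~100 GOPS peak at ~10 W
    "Mobileye EyeQ4 ASIC":  (2_500,   4),  # 2.5 TOPS at 3-5 W (4 W assumed)
}

for name, (gops, watts) in platforms.items():
    print(f"{name:20s} {gops / watts:7.1f} GOPS/W at {watts} W")
```

Under these coarse assumptions, the EyeQ4-class ASIC leads by more than an order of magnitude in GOPS per watt, while the FPGA's appeal lies mainly in its roughly 10 W absolute power budget combined with its reconfigurability.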
Both the opportunities and the challenges for reconfigurable in-car systems lie ahead. This article will first analyze the development of modern RTAV algorithms, then evaluate the performance of each hardware platform, and finally discuss how a more efficient reconfigurable system for RTAV can be built.

2. TRENDS IN AUTONOMOUS VISION

2.1. An overview of an ADAS system

Figure 2. A block diagram of an ADAS system.

An ADAS system collects data about the surrounding environment from sensors and from remote terminals such as cloud servers and satellites, and performs real-time recognition of surrounding objects to assist drivers or make judgements automatically, both for a better driving experience and to avoid potential accidents. A typical system is depicted in Fig. 2. As can be seen, a vehicle can carry a series of sensors such as cameras, radars, LIDAR and ultrasonic sensors to provide the input for a real-time description of the surrounding conditions, and the processors, using the trained algorithm models stored in memory, react by giving driver warnings or, in certain circumstances, by controlling the mechanical systems of the vehicle. Communication interfaces can help locate the car with map data, obtain traffic information from data centers and even offload some compute-intensive tasks to cloud servers; this will become even more powerful in the future, as much faster communication protocols such as 5G are already on the way.
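To make the data flow of Fig. 2 concrete, the following is a minimal, self-contained sketch of the sense-recognize-react loop described above; every class, function and threshold in it is a hypothetical placeholder rather than an interface of any platform mentioned in this article.

```python
# Hypothetical skeleton of the ADAS loop in Fig. 2: sensor capture, recognition
# with pre-trained models, then a driver warning or control reaction.
# All names and the 2.0 s threshold are illustrative placeholders.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DetectedObject:
    label: str                  # e.g. "pedestrian", "vehicle", "traffic_sign"
    time_to_collision_s: float  # estimated from range and closing speed

def capture_sensors() -> Dict[str, object]:
    """Stand-in for camera/radar/LIDAR/ultrasonic acquisition."""
    return {"camera": None, "radar": None, "lidar": None, "ultrasonic": None}

def recognize(frame: Dict[str, object]) -> List[DetectedObject]:
    """Stand-in for the trained models stored in on-board memory."""
    return [DetectedObject("pedestrian", 1.4), DetectedObject("vehicle", 6.0)]

def adas_step() -> None:
    frame = capture_sensors()
    for obj in recognize(frame):
        if obj.time_to_collision_s < 2.0:   # illustrative reaction threshold
            print(f"WARNING: {obj.label} ahead, brake assist requested")
        else:
            print(f"tracking {obj.label}")

if __name__ == "__main__":
    adas_step()
```

A real implementation would additionally feed the communication path (map data, traffic information, cloud offloading) into the same loop as an asynchronous input.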
Various functions can be achieved with an equipped ADAS system, and autonomous vision accounts for a large portion of them. As can be seen from Fig. 3, functions such as vehicle detection (VD), lane departure warning (LDW), forward collision warning (FCW), pedestrian detection (PED) and traffic sign recognition (TSR) are achieved by the autonomous vision system alone or together with audio and radar systems. Hence, it is important to find an efficient solution for autonomous vision processing. Next, we will give an overview of the vision algorithms and present an analysis of the potential hardware carriers.

2.2. Traditional algorithms of autonomous vision

For most autonomous vision functions such as PED, VD, LDW and TSR, the kernel algorithm can be generalized as a 2D object detection problem. As shown in Fig. 4, a traditional detection process consists of the following stages: image preprocessing, region of interest (ROI) selection, feature extraction and classification.
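As one concrete instance of this four-stage pipeline (not the specific method analyzed later in this article), a classical pedestrian detector can be assembled from OpenCV's HOG descriptor and its bundled linear SVM; the preprocessing and ROI steps here are deliberately minimal, and the test image name is hypothetical.

```python
# Classical detection pipeline: preprocess -> ROI -> HOG features -> SVM classifier.
# Uses OpenCV's stock HOG+SVM pedestrian model as a stand-in for the PED function.

import cv2

def detect_pedestrians(bgr_frame):
    # 1. Preprocessing: grayscale conversion and contrast normalization.
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)

    # 2. ROI selection: keep the lower half of the frame, where pedestrians appear.
    h = gray.shape[0]
    roi = gray[h // 2:, :]

    # 3-4. Feature extraction + classification: HOG descriptors scored by a linear SVM.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = hog.detectMultiScale(roi, winStride=(8, 8), scale=1.05)

    # Shift boxes back into full-frame coordinates.
    return [(x, y + h // 2, w, bh) for (x, y, w, bh) in boxes]

if __name__ == "__main__":
    frame = cv2.imread("road_scene.jpg")   # hypothetical test image
    for box in detect_pedestrians(frame):
        print("pedestrian at", box)
```

The same preprocess-ROI-feature-classify structure carries over to VD, LDW and TSR, with the HOG features and SVM replaced by function-specific features and classifiers.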
For traditional algorithms, usually steps like gain and expo-