Embedded Vision System

System on Modules

A System on Module (SOM) is a small, modular computer system integrated on a single, compact board. It contains a microprocessor, memory, a graphics processing unit and a set of peripherals. SOMs are popular because they allow designers to integrate a wide range of functions on one compact board. These modules are used in a variety of applications, including automotive, industrial automation, robotics, medical, military technology and consumer electronics.

ARM + FPGA

Zynq UltraScale+ ZU2 | ZU5 MPSoCs

CPU
Quad-core ARM Cortex-A53 MPCore, max. 1.5 GHz
RAM
2 GB DDR4 x64 | 2 - 16 GB DDR4 x16
eMMC
8 GB | 8 - 64 GB
FPGA
ZU2 | ZU5

x86-based system

Atom E3845 | E3940

CPU
Quad Core 1.91 GHz | 1.6 GHz
RAM
4 GB DDR3-1333 | 8 GB DDR4-2133 Dual
SSD
16 GB | 32 GB

FPGA board

Artix 7-75 | 7-100

MHz
150 via PCIe 2.0 x1 | 200 via PCIe 2.0 x4 | NET OCC

Operating Systems

Linux

PetaLinux is a Yocto-based operating system developed specifically for Xilinx hardware. It provides a powerful and reliable platform for embedded system development, offering a complete Linux development environment based on the Yocto Project distribution together with a BSP (Board Support Package) for Xilinx hardware, including FPGAs and system-on-chip solutions. PetaLinux includes a full Linux kernel implementation, a complete toolchain, a graphical user interface and a collection of applications, and it provides a development environment for user applications that run on Xilinx hardware.

CentOS 7 is a Linux operating system based on the Red Hat Enterprise Linux distribution. It is free and open, designed for use on desktop computers, servers and in the cloud, and it is recommended as a server operating system because it provides a stable and secure environment. It offers a wide range of features that make it a popular choice for anyone who needs a powerful and reliable operating system, ships with many of the applications and tools needed for managing Linux-based systems, and provides a large selection of installable packages. It is easy to install and comes with extensive documentation.

Windows

Windows 10 IoT Enterprise is a special version of Windows 10 designed for use in industrial and business environments. It provides a stable and secure platform that allows easy management even of multiple devices and protects against external threats such as malware. With a number of tools and features, applications and updates can be installed and managed faster, making Windows 10 IoT Enterprise an ideal tool for simplifying and automating your IT infrastructure.

AI Accelerator

Training Framework

There are many different AI training frameworks. A training framework for an AI application is a software platform that helps train and optimize AI models. It provides functions for loading data, defining models, and training, testing and optimizing them. With a training framework, developers can build AI applications faster and more efficiently.
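
As an illustration only, a minimal sketch of such a workflow using TensorFlow/Keras; the dataset, model size and number of epochs are placeholders, not part of any NET deliverable:

    import tensorflow as tf

    # 1. Load data (MNIST stands in for application-specific image data)
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # 2. Define the model
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # 3. Train the model
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5)

    # 4. Test / evaluate the model
    model.evaluate(x_test, y_test)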

Optimization

Xilinx Vitis AI is a software platform that enables machine learning and deep learning to be implemented on Xilinx FPGAs and ACAPs. The platform provides a set of tools that enable developers to develop, debug and optimize their applications, including a compiler, debugger, profiler, optimizer, synthesizer and simulator. With Vitis AI, developers can accelerate their applications on Xilinx hardware: the compiler translates the code into a hardware-accelerated format, the optimizer then tweaks the application to improve performance, and finally the synthesizer creates a hardware description that can be implemented on Xilinx hardware.

Runtime DPU

A Digital Processing Unit (DPU) is a processing engine implemented on an FPGA (Field Programmable Gate Array) and designed specifically for processing digital data. It performs the complex digital processing tasks required when handling video and audio data, as well as complex mathematical operations such as filtering, convolution and other functions needed for processing digital data.

IP Cores

IP cores are prefabricated designs that can be implemented on field-programmable gate array (FPGA) platforms. They are often used in image processing to master complex tasks in real time that cannot be handled by conventional processors. IP cores can be licensed individually or as a bundle: individual IP cores are usually specific to a particular task, while a bundle contains multiple IP cores that work together to provide the desired functionality.

Bundle: Surface Inspection System

Bundle: Width Measurement (Stereo System)

Homogenization

  • Gain correction in x- and y-direction
  • Continuous- or single shot-mode
  • Up to 4k pixels in x-direction (8k with interpolation enabled)

Homogenization can be achieved by adjusting the brightness, color and texture of each pixel to create a homogeneous image. The process can also be used to prepare the image for other processing purposes such as contrast enhancement and color correction.
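
A minimal NumPy sketch of one common form of gain correction (flat-field correction with bright and dark reference images, both of which are illustrative assumptions); the IP core performs the equivalent correction in the FPGA:

    import numpy as np

    def homogenize(image, bright_ref, dark_ref, target=None):
        """Per-pixel gain correction in x- and y-direction using reference images."""
        img = image.astype(np.float32)
        gain = bright_ref.astype(np.float32) - dark_ref.astype(np.float32)
        gain[gain == 0] = 1.0                    # avoid division by zero
        if target is None:
            target = gain.mean()                 # aim for a uniform brightness level
        corrected = (img - dark_ref) * (target / gain)
        return np.clip(corrected, 0, 255).astype(np.uint8)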

Feature Enhancement

  • Variance filter with enhanced functionality
  • Kernel configuration (9x1, 3x3 or 5x3)
  • Signed and unsigned mode
  • HDR mode

Feature enhancement with a variance filter is an image processing technique in which a variance filter is applied to an image to enhance its details. The variance filter increases the contrast differences in an image by amplifying the local variance within a range of brightness values.
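
A minimal OpenCV/NumPy sketch of a local variance filter; kernel size and output scaling are illustrative, and the signed/unsigned and HDR modes of the IP core are omitted:

    import cv2
    import numpy as np

    def variance_filter(image, ksize=(3, 3)):
        """Local variance E[x^2] - E[x]^2 over a small kernel, e.g. 3x3."""
        img = image.astype(np.float32)
        mean = cv2.boxFilter(img, -1, ksize)
        mean_sq = cv2.boxFilter(img * img, -1, ksize)
        var = mean_sq - mean * mean
        # stretch to 8 bit so that detail-rich (high-variance) regions appear bright
        return cv2.normalize(var, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)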

Feature detection

  • Bright and/or dark features
  • X1, X2, Y1 and Y2 coordinates
  • Area and centre of gravity of features
  • Endless and single-image mode
  • Clustering of features
  • Centre of Gravity for bright objects

Feature detection refers to finding light and dark threshold crossings. Feature detection is an important step in image processing, as it helps to identify and extract specific objects in an image.
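
A minimal OpenCV sketch of bright-feature detection with clustering, coordinates, area and centre of gravity; the threshold value and result layout are illustrative assumptions:

    import cv2

    def detect_bright_features(gray, threshold=200):
        """Threshold crossing, clustering and feature statistics for bright objects."""
        _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        features = []
        for i in range(1, n):                    # label 0 is the background
            x, y, w, h, area = stats[i]
            features.append({
                "x1": x, "y1": y, "x2": x + w, "y2": y + h,   # bounding coordinates
                "area": int(area),
                "cog": tuple(centroids[i]),                    # centre of gravity
            })
        return features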

FFT Algorithms

  • 1024 samples
  • FFT windowing (Gaussian, Blackman Nuttall, Hamming, Hann and more)
  • Padding of data
  • Up to 58kHz FFT calculation rate

The Fast Fourier Transform (FFT) is an important measurement method in audio and acoustic measurement technology. It decomposes a signal into its individual spectral components and thus provides information about its composition.
Examples: Vibration analysis / Periodic unevenness
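
A minimal NumPy sketch of the same processing steps (1024 samples, Hann window, zero padding):

    import numpy as np

    def line_spectrum(samples, n=1024):
        """FFT of a 1024-sample signal with Hann windowing and zero padding."""
        data = np.asarray(samples, dtype=np.float32)
        if data.size < n:
            data = np.pad(data, (0, n - data.size))   # padding of data
        spectrum = np.fft.rfft(data[:n] * np.hanning(n))
        return np.abs(spectrum)                       # magnitude of the spectral components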

Column and Row adder

  • Calculate row and/or column information
  • Sum of data
  • Minimum of data
  • Maximum of data

The pixels are added together in order to be able to detect long-wave differences in the gray values. The direction can be in a column, in a row, or both together.
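
A minimal NumPy sketch of the column and row projections (sum, minimum and maximum):

    import numpy as np

    def column_row_profiles(image):
        """Project the image onto columns and rows to reveal long-wave grey-value differences."""
        img = image.astype(np.int64)
        return {
            "col_sum": img.sum(axis=0), "col_min": img.min(axis=0), "col_max": img.max(axis=0),
            "row_sum": img.sum(axis=1), "row_min": img.min(axis=1), "row_max": img.max(axis=1),
        }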

Subpixel edge detection

  • Up to 256 rising and falling edges in x-direction per line

Subpixel edge detection is a technique for determining the position of an edge in an image. It is based on the interpolation of pixel values: the neighbouring pixel values are analyzed to locate the edge and determine its position exactly. This technique is particularly suitable for precision measurement of edges when higher accuracy is required than whole-pixel values can provide.
The results are then displayed in a higher resolution image.
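
A minimal NumPy sketch of sub-pixel edge detection on a single image line using linear interpolation between neighbouring pixels; the limit of 256 edges per line is omitted:

    import numpy as np

    def subpixel_edges(line, threshold):
        """Rising and falling threshold crossings in one line, with sub-pixel position."""
        data = np.asarray(line, dtype=np.float32)
        edges = []
        for x in range(data.size - 1):
            a, b = data[x], data[x + 1]
            if (a < threshold <= b) or (a >= threshold > b):
                frac = (threshold - a) / (b - a)     # linear interpolation between pixels
                edges.append((x + float(frac), "rising" if b > a else "falling"))
        return edges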

Laser-Line Centre Of Gravity

  • Sub-Pixel CoG detection
  • Height and Intensity per position
  • Clustering in y-direction
  • Multi-Laser-Line detection possible

In image processing, laser triangulation is used to determine a 3D object position with a laser line projected at an angle onto the image field. To do this, an algorithm is used to analyze the image and determine the position of the center of gravity of the laser line. The center of gravity of the laser line is then considered to be the position of the laser line on the object and a 3D position is generated from it.
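
A minimal NumPy sketch of the per-column centre-of-gravity calculation; the intensity threshold is an illustrative assumption, and clustering and multi-line detection are omitted:

    import numpy as np

    def laser_line_cog(image, min_intensity=30):
        """Sub-pixel centre of gravity of the laser line per x position."""
        img = image.astype(np.float32)
        img[img < min_intensity] = 0.0                 # suppress background
        rows = np.arange(img.shape[0], dtype=np.float32)[:, None]
        intensity = img.sum(axis=0)
        cog_y = np.where(intensity > 0,
                         (img * rows).sum(axis=0) / np.maximum(intensity, 1e-6),
                         np.nan)
        return cog_y, intensity                        # height and intensity per position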

Quadrature decoder

  • A-B-Signals
  • Forward and/or backward triggering
  • ImageCounter and TimeStamp generation
  • Debounce functionality

The IP core processes the signals from an encoder and generates a digital signal that represents length information and speed information in the machine. It is a very accurate measuring instrument that is often used in robotics, automation and machine vision.
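
A minimal software sketch of A/B quadrature decoding; the IP core evaluates the same signal transitions in the FPGA and additionally debounces the inputs and generates ImageCounter and TimeStamp values:

    # Valid A/B state transitions; state = (A << 1) | B
    _STEP = {(0, 1): +1, (1, 3): +1, (3, 2): +1, (2, 0): +1,
             (1, 0): -1, (3, 1): -1, (2, 3): -1, (0, 2): -1}

    def decode(samples):
        """samples: iterable of (A, B) bits; returns the signed position in counts."""
        position, prev = 0, None
        for a, b in samples:
            state = (a << 1) | b
            if prev is not None and state != prev:
                position += _STEP.get((prev, state), 0)   # invalid transitions are ignored
            prev = state
        return position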

GigE-Vision Transceiver

  • GVSP version 2.0 implemented
  • Up to 4 streams, sending and receiving
  • No CPU or Memory resources required during streaming

A GigE Vision transceiver enables fast and reliable transmission of GigE Vision data. The transceiver handles switching the data between the FPGA and the GigE Vision device, while the FPGA logic controls the communication between the FPGA and the transceiver.

Middleware

NET SynView SDK

SynView at a glance
SynView is a development environment created by NET and based on the GenICam/GenTL standards. It contains a high-performance and user-friendly API that makes it easier to create applications. SynView is compatible with Windows and Linux, so application engineers can concentrate fully on developing their solution.

The perfect building block for your vision architecture
SynView is the ideal development environment for various requirements. It is compatible with commercial, open source and custom image processing libraries. Current communication protocols and standards are used.

Discover new possibilities and functions
The SynView Explorer allows you to exploit your camera's full potential. Additional functions are available in the NET library, which follows the GenICam Naming Convention. Users of the Open Camera Concept can configure their custom module via the SynView API and the SynView GUI. These give you full access to all features for your individual solution.

Create applications with SynView
SynView contains a unique source code generator for the creation of applications in mere minutes without any programming. Create, compile and test the source code instantly with the Wizard. A comprehensive framework shows you a list of all available cameras, allows them to be configured, and enables the transmission of images and streams. C, C++ or NET Class Library Wrapper – it's your choice.

Save yourself the bother of programming
Source code snippets can be created in SynView with just a few clicks. They are then simply copied and integrated into the application. This means that new camera settings can be tested in Explorer first before they are used in the real application.

Software

Image Processing Libraries

  • Aurora Studio
  • Aurora Library
  • DL (limited use)

Zebra Aurora is a particularly simple and elegant solution for controlling enterprise-wide automation. Its clear and powerful user interface makes it easy to set up, deploy and operate intelligent cameras for machine vision.

  • HALCON
  • MERLIC
  • Deep Learning

MVTec HALCON is the comprehensive standard software for industrial image processing with an integrated development environment (HDevelop) that is used worldwide. HALCON helps reduce costs and ensures faster market availability. The flexible software architecture enables rapid development for all machine vision applications.

  • MIL X1

Matrox® Imaging Library (MIL) X1 is a comprehensive collection of software tools for developing machine vision, image analysis, and medical imaging applications. MIL X includes tools for every step in the process, from application feasibility to prototyping.

  • OpenCV

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. The library contains more than 2500 optimized algorithms, which include a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. OpenCV has more than 47 thousand users and an estimated more than 18 million downloads. The library is used extensively by companies, research groups and government agencies.

  • MATLAB

MATLAB (short for "MATrix LABoratory") is a proprietary multi-paradigm programming language and environment for numerical computation developed by MathWorks.

  • Coake 7 Vision Library

Coake® 7 is the modern operating environment for industrial image processing. From feasibility analysis and project planning to image acquisition, creation of inspection programs, automatic inspection and statistical evaluation, process analysis and documentation - Coake® 7 offers support in every project phase.

Carrier Boards

Carrier boards provide the interfaces and power supply for the embedded vision system and for any peripheral devices or networks. NET's standard carrier boards for smart cameras can be customized in form and function.

  • Ethernet
  • USB
  • DisplayPort
  • Serial interface
  • Fieldbus
  • Digital in/output
  • Power supply
  • Storage

Sensor Modules

  • SynView API / SDK
  • GenICam
  • Multi Sensor

CSI-2 camera modules from our sister company Allied Vision give us access to a wide range of sensor types. The MIPI modules can be operated in parallel without increasing the space required for the computing unit of the embedded vision system.

  • FPGA Image Pipeline
  • SynView API / SDK
  • 14 Gbit Sensor Data

LVDS sensor modules with up to 16 data pairs can integrate sensors with a transmission rate of up to 14 Gbit/s. The Sony IMX 250 and IMX 252 as well as the Luxima LUX1310 are integrated into the embedded vision system at full speed.

  • SynView API / SDK
  • GenICam / USB3 Vision
  • UVC (USB Video Class)

USB modules with USB 2 or USB 3 can be operated with different transmission options and the corresponding SDK and API:

  • Proprietary standard driver for Linux
  • Windows USB3 Vision with GenICam protocol
  • UVC (USB Video Class)
  • SynView API / SDK
  • GenICam / GigE Vision
  • Pre-processing

GigE camera modules support the GigE Vision and GenICam standards. NET's Open Camera Concept supports NET's IP cores or customer IP cores for image processing. Due to the resulting data reduction, a standard GigE transmission is sufficient.

Sensors

Matrix

IMX Series

0.4 - 12 MP | 728 x 54 – 4072 x 3046 Pixel | 30 – 280 fps | Rolling / Global Shutter

LUX1310

1.3 MP | 1280 x 1024 Pixel | 1050 fps | Global Shutter

MT9 10MP

10 MP | 2592 x 1944 Pixel - 3840 x 2748 Pixel | 7 – 14 fps | Rolling Shutter

AR 0521

5 MP | 2592 x 1944 Pixel | 67 fps | Rolling Shutter

Linescan

LUX 1310

1280 Pixel | Mono | 6.6 µm | 1,000,000 fps (Luxima in line scan mode)

Dragster Series

2048 - 4096 Pixel | Mono | 2.5 - 7 µm | 33,000 - 66,000 fps

Matrix Sensor in Linescan Mode

Mono / Color
1280 – 2500 Pixel | 3.5 µm

SWIR

IMX 991

0.4 MP | 656 x 520 Pixel | 5 µm | 250 fps | Global Shutter | 400 – 1700 nm

IMX 990

1.3 MP | 1296 x 1032 Pixel | 5 µm | 130 fps | Global Shutter | 400 – 1700 nm

Optics

1000 optics off the shelf
NET can offer you the most suitable lens for your application as we have an extensive range from which to choose. We want to make the choice of lens as easy as possible for you, so we are happy to offer advice based on our many years of practical experience. We pay attention to every relevant parameter, including your individual needs.

Custom / Semi-Custom Lenses
If there is no off-the-shelf product for your specific application, we can provide semi-custom and fully custom solutions. Just ask for it!

Fixed focal C-mount lenses

  • Huge variety of focal lengths, sensor formats, aperture sizes and resolutions

Special features available:

  • Ruggedized
  • SWIR and VIS-SWIR corrected
  • Waterproof
  • Semi-custom and custom options

Telecentric lenses

  • Wide magnification range, from 8x with FoV Φ1.0 mm down to 0.029x with FoV Φ275 mm
  • Working distances from 40 mm to 700 mm

Special features available:

  • Coaxial illumination
  • Ultra high resolution
  • SWIR
  • Special coatings
  • Semi-custom and custom options

Board level lenses (M12 / S-Mount)

  • Various specialities like low distortion, wide angle, fisheye, low light sensitive, pinhole and megapixel lenses

Special features available:

  • IR-coating
  • IR-cut filter
  • Waterproof lenses
  • Small lenses (M8, M7, M6...)

Macro lenses

  • Lenses with fixed and variable magnification, low distortion / distortionless lenses, various focal distances

Special features available:

  • Large sensor support
  • Ultra compact models
  • Ruggedized versions
  • Hexagon mounts

Line Scan / Large Format Lenses

  • Up to 90mm image diagonal / 16k line sensors
  • V-mount for easy assembly
  • Various mount adapters available (M42, TFL, M58, …)

Zoom lenses

  • Manual and motorized zoom lenses with high zoom ratios for wide magnification ranges from 0.1x to 10x

iam

Integrated Smart Vision System

iam sets new standards for vision-based self-sufficient decision-making and control processes. The system architecture featuring CPU and FPGA on one chip allows for better, more efficient system performance. iam uses the available options for hardware acceleration on the system-on-chip design. Its additional FPGA resources enable high-performance neural networks and conventional algorithms to be used more efficiently for image processing.

The refined Open Camera Concept offers users unique benefits. iam also enables users with no VHDL expertise to use the FPGA resources for their own vision solutions. They can also use commercial libraries, OpenCV or NET functions.

A wide range of configuration options are available for the many different requirements: CMOS image sensors with different resolutions, shapes and high image transfer rates, as well as a variety of interface options for integration into various system environments. iam supports companies on their way to the industrial Internet of Things (IIoT).

Highlights

Complete embedded vision system

Compact form factor

SoC architecture makes the Open Camera Concept easy to use

Machine Learning ready

Variety of image sensors

Customizable vision system


Corsight

Decentralized smart vision solution

Independent decisions and actions in real-time are the core competencies of Corsight. The integrated smart vision solution comes with matrix and line scan image sensors.

Users can freely combine the resources of the Intel Quad Core CPU and the FPGA for hardware acceleration. This means that machines can achieve the fastest cycle times with real-time image processing. Available potential is tapped to its full extent.

Corsight offers the NET Open Camera Concept for the customer-specific configuration of the software solution. The customer's trusted vision expertise in the form of IP cores and image processing software can be easily integrated into Corsight. Drivers for various license-free, licensable and proprietary libraries are available. The NET library also contains innovative algorithms for image enhancement and optimization.

We also offer a wide range of interface options for integrating the IP67-certified smart vision solution. There are no limits to your creativity when it comes to integrating and controlling Corsight: this can be by remote access via WLAN or Bluetooth, synchronization with other devices (PLC), as the host system of a multi-camera system for GigE vision cameras or, optionally, even as a GigE vision camera. Corsight opens up new possibilities for your application!

Highlights

Smart line scan vision solution

Powerful and fast

Customizable image processing

Rugged IP67 housing

Open to various libraries

Ease-of-integration


GigEPRO

Compact vision solution with IP Cores

GigEPRO stands for real-time image processing with the footprint of a compact GigE Vision camera. By processing algorithms in the camera, the amount of transferred data can be considerably reduced. This means that applications are virtually unaffected by bandwidth restrictions. Vision solutions featuring camera-integrated image processing at sensor speed offer major advantages in terms of efficiency as opposed to conventional PC-based vision architectures. With GigEPRO, multi-camera systems that would otherwise reach their technological or economic limits become a tangible – and scalable – reality.

The ability to integrate their own algorithms means that customers are free to program their trusted vision know-how into the camera themselves. This is the idea behind the NET Open Camera Concept: the camera becomes a unique vision solution presenting new competitive advantages. Customers also benefit from IP core copy protection in the FPGA.

Highlights

NET Open Camera Concept

GigE Vision and GenICam compliant

Award-winning image processing

One cable for all

3D laser triangulation

Various image sensors


Factory Automation

A cold rolling mill is part of the steelmaking process. It is used to roll steel ingots into thin sheets. The cold rolling mill consists of a series of rolls that are pressed onto the metal under pressure. This process reduces the thickness of the metal and compresses its grain structure; the resulting work hardening makes the metal stronger, and it can then be cut into various shapes and sizes.

Requirements:
The environmental conditions require a robust embedded system with IP67 protection class. Due to the different material thicknesses and fluctuating height positions, the height position of the measuring edge must be included in the calculation. The corresponding measurement data is transferred via the network, and the control data is transferred via a PROFINET interface in order to additionally realize a web guiding system.

Solution:
Two identical embedded systems operate in stereo mode. Due to a sophisticated exposure control, it is possible to work in reflection mode. Depending on the bandwidth, the system can be scaled. Connected with Ethernet, the cameras work independently and transfer the measured values to the master system. With an appropriate calibration template and routine, the system is calibrated on the machine over the entire width in 3D space.

Optics:
C-mount fixed focal length

Sensors:
Onsemi AR 0521 5MP 1/2.5" | Mono | Rolling Shutter | 2592 x 64 Pixel | ROI 800fps
Sensor Board:
MIPI
SOM Board:
Xilinx UltraScale+ SoC | ZU2 FPGA | 16 Gbit eMMC
Carrier Board:
24V Power Supply | 2 x Ethernet | Serial Interface
Operating System:
PetaLinux | Vivado
Software Tools:
R2Vision IP Core | OpenCV Library | PROFINET IP Stack

Agriculture

For pruning various plants, such as shrubs, there is a cutting unit on an agricultural machine (tractor). For pruning it is important to distinguish the trunk and the branches from the twigs. The cutting tool must be guided so that only the twigs are pruned; the trunk and branches should remain intact.

Requirements:
An image processing system based on an embedded vision system detects and distinguishes the positions of the trunk, branches and twigs. This is not possible with traditional image processing; therefore, segmentation based on neural networks is required. The position detection, i.e. the segmentation of the image content, must be performed at a frequency of at least 5 Hz. The complete application, including the AI, should run on the embedded vision system and be integrated into the agricultural machine via CAN bus.

Solution:
To train the network for segmentation with the TensorFlow framework, corresponding reference images were captured and labeled under the different conditions during one season. The embedded vision system "iam" is equipped with a DPU (Digital Processing Unit) on the FPGA to achieve the required working speed of at least 5 Hz. The custom carrier board was equipped with the CAN fieldbus and a 12 V power supply. The complete application is based on OpenCV.
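
As an illustration only, not the actual project code: a minimal OpenCV loop with a placeholder segmentation step, a hypothetical CAN message layout (python-can over SocketCAN) and the 5 Hz cycle:

    import time
    import cv2
    import numpy as np
    import can   # python-can, assuming a SocketCAN interface "can0"

    def segment(frame):
        """Placeholder for the trained segmentation network (trunk / branch / twig)."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        return cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))   # crude green mask as stand-in

    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    cap = cv2.VideoCapture(0)
    CYCLE = 0.2                                    # 5 Hz => at most 200 ms per cycle

    while True:
        start = time.monotonic()
        ok, frame = cap.read()
        if not ok:
            break
        ys, xs = np.nonzero(segment(frame))
        x = int(xs.mean()) if xs.size else 0       # illustrative target position for the cutter
        y = int(ys.mean()) if ys.size else 0
        bus.send(can.Message(arbitration_id=0x120, is_extended_id=False,
                             data=[x >> 8, x & 0xFF, y >> 8, y & 0xFF]))
        time.sleep(max(0.0, CYCLE - (time.monotonic() - start)))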

Optics:
Endocentric | 16 mm Visible / IR cut

Sensors:
Sony IMX 264 3MP 1/1.8″ | Color | Global Shutter
Sensor Board:
MIPI
Processor Board:
Xilinx UltraScale+ SoC | ZU5 FPGA | DPU AI Accelerator

Carrier Board:
24V Power Supply | Ethernet | Fieldbus CAN

Operating System:
PetaLinux | Vivado
Software Tools:
R2Vision IP Core | OpenCV Library | Custom Application

Liquid Packaging

Requirements:

  • Optical inspection of bottles and containers (beverage crates) in beverage filling lines.
  • Label checks (present / not present), position, plausibility, barcode, date detection.
  • Fill level checks of various containers
  • Crate and bottle detection as well as sorting of empties
  • Detection of contamination and damage to bottles and containers

Solution:
Our GigEPRO Cameras enable uninterrupted quality control in real time by assigning external measured values (bottle position) to the images they take of the individual bottles, and transmitting this information to a central computer (host PC) in real time. The basis for this is NET's Open Camera Concept. This enabled Syscona to integrate its own real-time algorithms into the camera.

Optics:
C-mount fixed focal length

Sensors:
Sony IMX 264 5MP 1/2“ | Mono | Global Shutter 2592
SOM Board:
Xilinx Spartan-7 FPGA
Carrier Board:
24V Power Supply | Ethernet | Digital In/Out
Software Tools:
IP Core | OCC Image Coding

Sports & Entertainment

At finish lines for various sports, a continuous image is recorded via line scan technology and later analyzed for a photo finish. In addition, the embedded vision system is synchronized with a time generator. The system is also intended to record autonomously and operate on a 12V supply in battery mode, independent of any network or power supply problems.

Requirements:
An image processing system based on an embedded vision system is operated with an IMX 252 matrix sensor in line scan mode. In matrix mode an exact setup of the system is possible. In the later line mode, the lines are assembled into an infinite image at more than 5000 Hz. In addition, a fast storage of the image recording as well as a serial communication with a timing system is required.

Solution:
The Sony IMX 252 sensor is operated alternately in matrix mode and in line scan mode. For this, a frame is already assembled in the FPGA from a defined number of lines. The video recording must be stored 100% uncompressed; the 64 Gbit eMMC can store up to 100 MB per second for later analysis. The complete application is based on OpenCV.
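
A minimal NumPy sketch of the line-to-frame assembly that the FPGA performs before storage; line width and lines per frame are illustrative values:

    import numpy as np

    LINE_WIDTH = 2048        # illustrative; depends on the sensor configuration
    LINES_PER_FRAME = 1024   # a frame is generated from a defined number of lines

    def frames_from_lines(line_source):
        """Collect single acquired lines into uncompressed frames for the eMMC."""
        buffer = np.empty((LINES_PER_FRAME, LINE_WIDTH), dtype=np.uint8)
        row = 0
        for line in line_source:
            buffer[row] = line
            row += 1
            if row == LINES_PER_FRAME:
                yield buffer.copy()
                row = 0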

Optics:
Zoom

Sensors:
Sony IMX 252 3MP 1/1.8″ | Color | Global Shutter
Sensor Board:
LVDS
Processor Board:
Xilinx UltraScale+ SoC | ZU2 FPGA | 64 Gbit eMMC

Carrier Board:
12V Power Supply | 2 x Ethernet | Serial Interface

Operating System:
PetaLinux | Vivado
Software Tools:
R2Vision IP Core | OpenCV Library | Custom Application

Surface Inspection System

A surface inspection system is an automated system that inspects the surface quality of materials. It uses a combination of optical sensors to detect surface defects in real time. The sensors can detect a variety of surface features such as roughness, wear, contrast and color. This information is passed to a controller that analyzes the data and makes a decision about the quality of the surface. The results can be displayed in real time and/or stored in a database. A surface inspection system can also be combined with other inspection systems to provide a complete surface inspection.

Requirements:
Defects in the tenth-of-a-millimeter range are detected on fast-moving objects such as a steel strip moving at up to 3000 m/min. The high amount of image data requires fast real-time evaluation on the FPGA of each camera. This reduces the image data many times over and shifts the required high computing power to the cameras.

Solution: