Key machine vision trends for 2021 and beyond
BitFlow delivers insights that reflect the evolving nature of the marketplace
Predicting trends in machine vision can be difficult in the best of times, but following 2020, a year of widespread uncertainty, planning ahead has become more complicated than ever before.
Despite this uncertainty, BitFlow, a leading provider of Camera Link frame grabbers, recently gathered insights that reflect the evolving nature of the marketplace to assist machine vision professionals in planning for 2021 and beyond. Notable findings include anticipated growth in 3D inspection, SWIR cameras, CoaXPress 2.0, embedded vision, and vision-guided robots.
3D Inspection
Increasing demand for higher quality inspection across industrial verticals is driving the adoption of 3D imaging. Although X and Y data alone is sufficient for many applications, 2D vision systems have their limits. The ability of a 3D system to observe, inspect, and scrutinize objects with depth information is gaining considerable traction in food, automotive, pharmaceutical, and semiconductor applications. In 2021, expect advances in 3D imaging algorithms to improve the accuracy and speed of reconstructing objects under inspection, valuable in applications such as bin picking of heterogeneous objects in varied orientations and stacks.
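The depth data these systems produce ultimately rests on triangulation. As a minimal sketch of the principle, assuming a calibrated stereo pair with hypothetical numbers (the article does not describe a specific system): depth Z follows from pixel disparity d via Z = f * B / d, where f is the focal length in pixels and B the camera baseline.

```python
# Illustrative sketch of stereo triangulation, the basis of many 3D vision
# systems. All values below are hypothetical, chosen for illustration only.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (meters) of a point seen disparity_px apart by two cameras."""
    return focal_px * baseline_m / disparity_px

# A feature seen 40 px apart by two cameras 10 cm apart, with f = 800 px
z = depth_from_disparity(40, 800, 0.10)  # 2.0 m away
```

Real 3D reconstruction pipelines add calibration, rectification, and dense matching on top of this relation, but the per-point geometry is exactly this formula.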
SWIR Cameras
As prices come down, system integrators are increasingly developing new inspection systems based on shortwave infrared (SWIR) line scan cameras. SWIR cameras are not new, but recent advances in sensor technology have made them more practical in applications including silicon inspection, laser beam profiling, hyperspectral imaging, chemical and plastics sensing, and medical imaging. The SWIR spectrum, spanning wavelengths between 900 and 2500 nm, reveals features not immediately obvious in visible light. It is especially effective in food inspection involving moisture detection, for example, separating frozen fruit from plastic pieces that may be mixed in, or identifying the fat content of meat. To a frame grabber, the data coming from the camera, whether SWIR, MWIR, or visible, is the same, and we have always been able to process it. Recently, however, the increased data output of these cameras has made a frame grabber a necessity.
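To make the moisture-detection example concrete: water absorbs strongly in SWIR (notably near 1450 nm), so moist material appears dark at that wavelength relative to a nearby reference band, while dry plastic reflects roughly equally at both. A minimal sketch of such a band-ratio classifier, using synthetic data and assumed band positions (not any specific camera or BitFlow product):

```python
import numpy as np

def moisture_index(cube, ref_band, water_band):
    """Normalized difference: high where water absorption darkens the pixel."""
    ref = cube[:, :, ref_band].astype(float)
    wet = cube[:, :, water_band].astype(float)
    return (ref - wet) / (ref + wet + 1e-9)

# Synthetic 4x4 scene with 2 bands (reference ~1100 nm, water ~1450 nm).
# Left half is frozen fruit: bright at the reference band, dark at 1450 nm.
# Right half is plastic: a nearly flat spectrum across both bands.
cube = np.zeros((4, 4, 2))
cube[:, :2, 0], cube[:, :2, 1] = 0.8, 0.2    # fruit: strong water absorption
cube[:, 2:, 0], cube[:, 2:, 1] = 0.7, 0.65   # plastic: no absorption feature

mask = moisture_index(cube, ref_band=0, water_band=1) > 0.3
# mask is True on the fruit pixels, False on the plastic pixels
```

A production system would calibrate reflectance and choose thresholds empirically, but the spectral contrast this sketch exploits is what makes SWIR effective here.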
CoaXPress 2.0
The current version of CoaXPress, CXP v2.0, added 10 Gbps (CXP-10) and 12.5 Gbps (CXP-12) link speeds, doubling the 6.25 Gbps (CXP-6) maximum of the previous generation, and is set to become the de facto standard for high-speed machine vision interfaces. In the coming year and beyond, system integrators will embrace CoaXPress v2.0, especially CXP-12, for high-speed inspection of semiconductors, consumer electronics, automotive parts, and much more. CXP v2.0 is also being rapidly adopted in aerospace and intelligent traffic monitoring, industries that have long relied on coaxial cable. Newer, more affordable CoaXPress frame grabbers featuring single- or dual-link architectures are also bringing CXP-12 to smaller applications that previously used Camera Link, USB 2.0, and GigE Vision interfaces.
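To see what these link speeds mean in practice, here is a back-of-the-envelope sketch (my own illustration, not a figure from the article). CoaXPress uses 8b/10b encoding, so a 12.5 Gbps CXP-12 link carries at most about 10 Gbps of image payload; protocol overhead is ignored, and the camera model is hypothetical.

```python
# Rough ceiling on frame rate over a CoaXPress connection. The 0.8 factor
# accounts for 8b/10b line coding; packet overhead is ignored for simplicity.

def max_fps(width, height, bits_per_pixel, links, link_gbps=12.5, coding=0.8):
    data_rate_bps = links * link_gbps * 1e9 * coding  # usable payload bandwidth
    frame_bits = width * height * bits_per_pixel
    return data_rate_bps / frame_bits

# Hypothetical 12-megapixel, 8-bit mono camera on a quad-link CXP-12 grabber
print(round(max_fps(4096, 3072, 8, links=4)))  # prints 397
```

At roughly 400 fps for a 12 MP sensor, it is clear why CXP-12 targets high-speed inspection, and why such cameras need a frame grabber rather than a standard network interface.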
Embedded Vision
An embedded vision system integrates a board-level camera with a processing board that acts as a miniaturized computer. This “all-in-one” approach is a growing alternative to PC-based image processing, especially in lean applications where small size, low power consumption, and low cost are important. The next frontier for embedded vision will involve deep learning, made possible by more powerful FPGAs that open up opportunities to speed up computationally intensive vision and neural algorithms on embedded devices. By integrating this new generation of embedded vision with deep learning algorithms into systems and products, organizations gain access to a trove of valuable data that empowers them to uncover new insights, imagine new applications, and improve existing production processes.
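The "computationally intensive" workload at issue is dominated by 2D convolution, a regular, multiply-accumulate-heavy kernel that maps well onto FPGA fabric. A minimal NumPy sketch of that operation, for illustration only (a deployed embedded system would use a vendor toolchain, not Python):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2D convolution: the core op of vision neural nets."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # One multiply-accumulate per kernel tap -- the work FPGAs pipeline
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A 3x3 horizontal-gradient (Sobel) kernel on a tiny synthetic frame with a
# vertical dark-to-bright edge
frame = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
edges = conv2d(frame, sobel_x)  # strong response where intensity steps 0 -> 1
```

Every layer of a convolutional network repeats this pattern at scale, which is why moving it off the CPU is what makes deep learning feasible on embedded devices.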
Vision-Guided Robots
The outbreak of COVID-19 has accelerated the growth of vision-guided robots (VGRs), which reduce human contact in many working environments. VGRs also help address an aging population and rising labor costs, since they can perform tasks much as human workers do. Significant momentum is behind vision-guided robots for tasks such as appearance inspection, dimensional inspection, and counting, as well as picking and positioning. To support these tasks, the robotics industry has shifted from 2D vision to 3D vision, which generates much richer data in all three dimensions, making it ideal for complex tasks involving diverse object shapes and orientations. Another emerging technology, Visual Simultaneous Localization And Mapping (VSLAM), enables a vehicle to autonomously navigate an unknown environment by mapping the surrounding area while simultaneously tracking its own location. These methods rely on cameras capturing massive amounts of data, which requires high-speed interfaces such as CoaXPress and PCIe-based frame grabbers to transfer the image data to the host PC.