Tracker Phase 2 upgrade, IIHE(ULB/VUB) Pattern Recognition Working Group

* Development of pattern & optical recognition procedures and tools for smooth, efficient and foolproof gluing during module production!

Meetings: bi-weekly (every even week) on Fridays at 13:00 CEST, Zoom only.


Pattern & optical recognition:

Task list:

| *Assigned to* | *Description* |
| (unassigned) | Commission the current camera to take sample pictures |
| AlexanderMorton | Select a new camera |
| AlexanderMorton | Kapton placement |
| AlexanderMorton | Jig placement |
| (unassigned) | Glue dispensing check |
| (unassigned) | After-cure glue spillage check |
| (unassigned) | After-cure kapton placement check |
| (unassigned) | Insertion of the results in a database |



- Started integrating the lensfun library into the Pattern Recognition repository:
   * Our camera lenses are not in the supported list of lenses -> need to calibrate ours.
   * Worked through the various listed tutorials to properly calibrate our camera.
   * Begun integrating the lensfunpy library into the PatternReco software repository.

- Discussion re aims:

* Yannick + Tahys: 1-2 prototypes available for testing from 14th June 2021 for laser sensor tests – aim to be ready to conduct tests in circa 2 weeks.

* Inna raised queries re. jig position calibration procedures, as the current approach is time consuming: it probes three positions rather than a single one.

** How/what to do to calibrate a 3D jig from a 2D picture/image?

** Yannick: Find pixel on vertical axis with respect to the needle.

** Tahys: Calibrate needle with respect to the jig, accounting for bend of the needle:

* Touchscreen? Detecting edges?

* Something fixed on the table instead of a touchscreen?

* Senne: Question – is probing slow?

** Discussion re. use of sensor vs speed?

** Sensor is faster globally but introduces an additional point of failure.

** Use both?

* Get benefits of high speed from the sensor.

* Probing provides a fall back solution if the sensor fails.

- Aims:
   * Discovery program with needle to find edges
   * Laser + PatternReco to find edge
   * Use jig as it stays in place for all three measurements
   * Find edge (x, y) for jig (square angles, as needle needs 90 degree angles)
   * Senne to provide picture/drawing of sensor jig for sw R&D and slides (Inna expressed preference for the Kapton transfer jig), explaining the jig location method currently used with the probing method
   * Challenge code to be written

Next meeting scheduled for 18th June 2021 at 1300 CEST


- Alexander gave a recap of the software status and apologised for the meeting hiatus due to personal mental health issues.

- Software status: software validation completed on existing static images, but it now requires integration with the broader framework.

- Discussion about urgent priorities with respect to other components of the project:

  • Most urgent step that needs addressing is to ensure that the camera is properly calibrated.
  • Calibration requirements: basic geometric corrections (lens requires a calibration grid for this), including angle, brightness, focus, etc.
  • Plan to use “lensfun” software library to correct geometric distortion.
  • Integrate these software developments into the machine (will run on the box computer; currently a laptop). Any software should run on the box computer; the laptop is pre-2017, so although no problems are expected in running the software, this should be validated for completeness.
    • Senne provided laptop specs via Mattermost – these will be uploaded to the twiki.
    • Yannick noted that the box machine specs are in chat history on Mattermost – will also be uploaded to twiki.
- Noted that Systems Tests meetings have shifted from their 1100 slot (as detailed on the trello schedule) to 1000. Plan to drop the convenors a line so that their weekly meetings and this off-weekly meeting do not clash in future.


- Yannick: found a single board computer for the machines. Start development on the UDOO machine at some point ...
- Yannick: explored using digital microscopes (originally bought discounted for personal use) to check the alignment of hybrids in the sandwich and needle probe alignment ... to be discussed with Wim and Eric.
- Discussed ordering at least one, up to three, additional cameras to supplement the existing two (5 = 2x for the two gluing machines, 1x for streaming, 1x spare, 1x R&D/spare). Email drafted and ready to send once budgeting has been finalised.
- Xavier: enquired about the status of the code (reported functional standalone but now requiring integration with the setup); testing of the software on the UDOO machine; and the ordering of SSD/RAM for the UDOO machine.


No meeting - cancelled due to convenor unavailability.


Discussed brightness/overexposure detection algorithm, focussing cones and code synergies with updated metrology (e.g. multithreading image acquisition and processing).
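The multithreaded image acquisition and processing idea can be sketched as a producer/consumer queue; the frame tuples and the doubling step are placeholders for real camera grabs (e.g. a capture loop) and real processing:

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=8)  # bounded so acquisition cannot run away
results = []

def acquire(n_frames):
    # Producer: stand-in for a camera grab loop.
    for i in range(n_frames):
        frame_queue.put(("frame", i))
    frame_queue.put(None)  # sentinel: acquisition finished

def process():
    # Consumer: runs concurrently, so grabbing never stalls on processing.
    while True:
        item = frame_queue.get()
        if item is None:
            break
        results.append(item[1] * 2)  # stand-in for edge detection etc.

producer = threading.Thread(target=acquire, args=(5,))
consumer = threading.Thread(target=process)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # [0, 2, 4, 6, 8]
```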

Discussed touch screen:
- measurements work when it is touched
- PR twiki page has 3D printed parts for touch screen calibration
- check for the tips of the four arms or the corners of the screen through holes -> explore both appropriate options!


Camera light admittance:
- Usage in the dark room has shown that when the admittance ring is fully opened there is overexposure, which is auto-compensated for.
- Discussed a series of parameters in openCV that would be useful for characterising image brightness.

Glue Systems synergies:
- Need to determine the offset between the camera and the glue needle and to identify the vertical axis of the camera. This calibration would allow all other positions to be extracted with no angular corrections.
- Proposal: put an object with a hole on the table, with a vertical line or dot on the table surface, and extract the direction/angle from the pixel.
- Key challenge: get both circles/dots aligned physically.
- Ideas: a cone with a (flat) white top, inverted cones within the cone; light the cones with an LED ring from the camera or a low resistance filament at the bottom of the hole.
- Action: Yannick has designed 4 different cones and will print one of each for testing purposes, before work determines which design is optimal and should be pursued.
- Ali noted that openCV has limitations for finding the centre of 2 circles, but after brainstorming, this should be overcome by applying filters/colour/masks to remove one part.


The second new camera has been installed in its new mount; Yannick will be in the dark room on the afternoon of Thursday 24th September to mount it on the machine.

As the new cameras have a manually adjustable light admittance ring, a means to optimise this setting is required. The plan is to develop an interactive python script that informs the user when the manual setting is optimal against a reference focus grid.

Touch screen sensor - need to measure the offset between the needle and the camera. Yannick will bring the screen into the IIHE and has put documentation on the twiki.

-- Alexander David Morton - 2020-09-18


Following the successful initial tests of the "Sony IMX179" camera module, it was decided that a second camera would be ordered so that one could be mounted on the machine in the clean room and the other used for live streaming, as the new camera has 25x the resolution of the old webcam, manual focus (and variable field of view), and costs a third of what the webcam did. The second camera arrived on 04/09/2020 and preliminary tests (i.e. confirming it works out of the box) have been done.

Using technical drawings from the manufacturer, Yannick has designed a mount for the cameras (files and instructions for printing are available). The first camera is now mounted in a test stand and is in Alexander's office for Pattern Reco R&D and tests. The mount for the second camera has been printed by Ali and requires drilling before the second camera can be installed in the clean room. Installation is planned for early next week.

-- Alexander David Morton - 2020-09-04


We discussed the merits of obtaining a new higher definition camera for developing pattern recognition software and checking glue placement and spills. While this would be a temporary solution, we agreed that a better camera would be required to take good pictures of the first functional module, as pictures of every module are very important to the subgroup given that we don't build functional modules often.

We collectively agreed that the “Sony IMX179” camera was the optimal camera given its specifications (8 megapixels, USB interface, supports Windows/Linux/Raspberry Pi/etc) and its cost and shipping times from several potential vendors.

Subject to your approval, we would like to order this camera for the Pattern Recognition subgroup. We have come across two potential vendors to buy from:
* the cheapest supplier we found was SOS Solutions, at €48.95 including BTW / €40.45 excluding BTW, with free same day shipping
* the next cheapest was at €69.00 with free shipping

-- Alexander David Morton - 2020-08-21


* Yannick checked polarising filters. They remove reflections on the backside of the sensor, though not completely (there remains a reflection of the camera on the filter itself). They work quite well for external light, which does not pass through the filter twice. Putting a filter on the camera and shielding it should be fine.
* Alexander tried alternative focusing algorithm:
1) convert the image/video frame to greyscale
2) use Canny (default parameters) to find the edges of this greyscale image
3) flood-fill the image using the edges as boundaries
4) create a mask from the flood-filled image
5) rerun Canny -> the sharper the image, the fewer edges found; the less sharp the image, the more the number of edges found is inflated compared to step (2)
6) use the Hough Transform to find straight lines from the edges
7) find intersecting lines consistent with a corner (within 0.01 radians of pi/2, i.e. ~0.5 degrees of 90 degrees)
8) save frames with only one corner and fewer than 4 Hough lines (i.e. only 2 or 3 lines)
9) compute the variance of the kernel of an edge algo (tried Laplacian/Canny) on the greyscale image
10) if more than one video frame with only one corner is found, choose the one with the largest variance -> this corresponds to increased sharpness
This was motivated by my previous focusing work, where both the object of interest and the background would be in focus and I had used the variance of an edge detection algo like Canny or the Laplacian transform to find the correct focus (largest values = sharpest image). I had previously used the Laplacian transform as there were fewer free parameters (and I really dislike spending time fine-tuning parameters).
However, as this work has the object much closer to the camera, the background is always out of focus, so I decided to explore an alternative way to robustly pre-select "sharp" video frames, reduce noise contributions from the background and the object of interest, and then use the variance to filter these results. It turned out that in two of the three cases I tried, the algo didn't have to compare multiple video frames, as only one frame had one corner. The other case found two, which (by visual inspection) were very similar in sharpness, and the sharper one had the greatest variance.
To help illustrate steps (2-4): Image_Pasted_at_2020-7-24_10-45.png
If the image is out of focus, multiple edges are found, and the floodfill and masking produce lots of edges -> effectively amplifying the number of Hough lines found when out of focus, whereas in focus only a very small number are found.

* Find reference for camera objectives of WB
* Check working distance of the WB camera (checked, it is about 12 cm)
* Take pictures for jigs position calibrations

-- Inna Makarenko - 2020-07-24



* Alexander tried to run the sw on an old computer (Intel® Core™ i3-5005U Processor (2 cores, 15 W TDP, 3M Cache, 2.00 GHz, launched Q1 2015); 1x4GB SODIMM DDR3 Synchronous 1600 MHz); it was easier to get it working using a Linux distribution
* Yannick prepared new grid for checking the position accuracy of LitePlacer
* New task: compensate tilt during encapsulation

Take photos with LitePlacer:
* for tilt compensation during encapsulation
* with all jigs for position calibration
* for accuracy checks with new grid

* Deploy online processing of images; keep one image once in a while for cross checks, and save the first and last sets of images

In future:
* Put permanent marks on table in order to have some references

-- Inna Makarenko - 2020-06-26


* Taking pictures during kapton gluing, pigtail gluing and encapsulation is already included in the sequence
* Pictures for different gluing steps are available here:
* In case there are any suggestions about samples of pictures to take during the gluing process, let us know; they will be included in the sequence
* Alexander is working on tidying up the code
* To check if the grid is ok for accuracy checks
* Alexander will check:
   * computing power needed to process images
   * sw performance on an old laptop and a Raspberry Pi
* There are some thoughts about single boards computers (from Yannick):
In this project, we have 2 different needs:
1) high end processing power for gluing machine, pattern recognition and such
2) low end small board to process log, alerts, small DB access, low power board

for 1), (open) drivers and software support, a CSI port and linux mainline integration are key
or the Rock Pi X (should be cheaper)
it is x86 so long term software support will be fine (expensive but would be perfect for accelerated openCV)

for 2), the keys are low power, connectivity, small size, reliability and PCB integration
an RPi, for instance, is too big, not very easy to integrate on a PCB, and power hungry

-- Inna Makarenko - 2020-06-12


Status update from Alexander:
* implemented base line class (and class functions) for use across scripts to reduce repetitive code/increase functionality (PR opened and accepted yesterday)
* new robust line-line distance function (written and now validated, to be included in corner-detection algorithm PR)
* corner-detection algorithm - results shown Monday, code being tidied up and some optimisation of parameters done before opening PR (will include new robust line-line distance function).

-- the algorithm works by applying floodfills and masking after a traditional edge detection algo (separating the background from the object and removing background/object details); it then applies either a standard Hough Transform or a Probabilistic Hough Line Transform to find lines, and looks for perpendicular intersecting lines (as all corners considered are roughly at right angles)
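The perpendicularity test and the corner point can be sketched for lines in Hough normal form (rho, theta). The two example lines are synthetic and the 0.01 radian tolerance is an assumed choice:

```python
import numpy as np

def hough_intersection(l1, l2):
    # Intersection (x, y) of two lines given in Hough normal form (rho, theta):
    # each line satisfies x*cos(theta) + y*sin(theta) = rho.
    (r1, t1), (r2, t2) = l1, l2
    a = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    return np.linalg.solve(a, np.array([r1, r2]))

def is_right_angle(l1, l2, tol=0.01):
    # True if the two lines meet within tol radians of 90 degrees.
    dtheta = abs(l1[1] - l2[1]) % np.pi
    return abs(dtheta - np.pi / 2) < tol

top_edge = (50.0, np.pi / 2)   # the line y = 50
side_edge = (30.0, 0.0)        # the line x = 30
corner = hough_intersection(top_edge, side_edge)  # -> (30, 50)
```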

-- Inna Makarenko - 2020-05-22


* Senne created a simple GUI and incorporated it into the GLUI. The new feature enables moving the camera in different directions (X, Y and Z). The inputs are the X, Y, Z coordinates and a button to move the camera to this position. The purpose of this GUI is to make it easier for everybody to move the camera to the desired position remotely.
* List of overlapping tasks for Metrology and Pattern Recognition groups (the details and status can be found at the Metrology WG twiki page)
-Optimization of the base edge detection algorithm
-Make line filtering algorithm more robust for more use cases
-Make the function that calculates distances between edges more flexible
-Add pattern recognition algorithm to detect alignment marks
-Optimize algorithm that stops the z-stage when corner is in focus
-Add corner detection algorithm
-Integrate the xyz-stages with the pattern recognition setup
-Integrate the measurement outputs with the database framework
* Alexander will try to present first attempts at improving the existing SW (provided by Emil and Ali).

-- Inna Makarenko - 2020-05-15


* Discussion of pictures taken with the Lite Placer camera
* Discussion of possible Clean Room activity
* Discussion of further steps and tasks

* to take another set of pictures with the recent kapton strips and the FullHD camera configuration
* to define set of pictures to be taken with preset camera positions
* for future: to turn the light on in the clean room to be able to work with the camera from remote but without any camera movements
* to identify the pixel size of the current camera on the Lite Placer (to match the number of pixels with real dimensions)
- (this depends on the height of the object you are looking at)
* Yannick will provide pictures of the recent kapton strips on the jig with different filters (for demonstration purposes)
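A back-of-the-envelope sketch of this pixel-size calibration, including the height dependence noted above. The 10 mm reference mark, 500 px span and 120 mm working distance are illustrative numbers (the latter matches the roughly 12 cm working distance measured for the WB camera):

```python
def mm_per_pixel(known_length_mm, measured_length_px):
    # Image scale from a reference feature of known physical size on the table.
    return known_length_mm / measured_length_px

def scale_at_height(table_scale, working_distance_mm, object_height_mm):
    # Pinhole-camera similar triangles: a raised object sits closer to the lens,
    # so each pixel covers proportionally fewer millimetres on its surface.
    return table_scale * (working_distance_mm - object_height_mm) / working_distance_mm

table_scale = mm_per_pixel(10.0, 500)                   # 0.02 mm per pixel
raised_scale = scale_at_height(table_scale, 120.0, 12.0)  # 0.018 mm per pixel
```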

* Alexander confirmed that he will work on Pattern Recognition
* kapton is transparent enough to see the jig bars under it, but maybe some extra light has to be considered
* discussed strategy to check kapton position on the jig (different strategies for transparent and non-transparent kapton strips)
- Possible solution: outer and inner rectangles

-- Inna Makarenko - 2020-05-08


* Introduction of new group
* Discussion of open issues and tasks

* The Pattern Recognition group will tightly cooperate with the Metrology group, since a lot of tasks are similar (e.g. edge detection).

* Camera on the Lite Placer:
Ali succeeded in taking pictures using the camera on the Lite Placer
   * Today he will try to take pictures of the jig at its proper place (during glue dispensing)
   * He will also take different pictures of kapton strips on the kapton transferring jig:
      * placing kapton strips using l-shapes with the groove to the top
      * another picture with the groove of one of the l-shapes to the bottom (to check if we can detect this using PR techniques)
      * a picture with one of the kapton strips obviously displaced

* Alexander will start working with pictures from Ali to check jig position and kapton placing
* Emil will share his software for image processing developed for metrology checks
* Ali will share his software for focus detection, as it will be useful for the metrology group
* Inna will contact Dima concerning putting the results of checks in the database (local)

-- Inna Makarenko - 2020-04-24


Group convener: Alexander David Morton

Mattermost chatroom:


Color transforms (RGB => HSV, HSL) on sample images








Colorspace theory:

-- InnaMakarenko - 2020-04-21

  • PV2020.2.en.pdf: Call for summer internship 2020 by Yannick Allard
Topic attachments
| *Attachment* | *Size* | *Date* | *Who* | *Comment* |
| IMG_3605.JPG | 3500.3 K | 2020-05-08 | YannickAllard | Color transforms on Kapton strips |
| IMG_3605_HSL_Lum.JPG | 2499.1 K | 2020-05-08 | YannickAllard | Color transforms on Kapton strips |
| IMG_3605_HSL_Saturation.JPG | 1564.6 K | 2020-05-08 | YannickAllard | Color transforms on Kapton strips |
| IMG_3605_HSV_Sat.JPG | 1522.6 K | 2020-05-08 | YannickAllard | Color transforms on Kapton strips |
| IMG_3605_HSV_Value.JPG | 2493.4 K | 2020-05-08 | YannickAllard | Color transforms on Kapton strips |
| IMG_3605_Hue.JPG | 1800.8 K | 2020-05-08 | YannickAllard | Color transforms on Kapton strips |
| Image_Pasted_at_2020-7-24_10-45.png | 24.4 K | 2020-07-28 | InnaMakarenko | Alternative focusing algorithm |
| PV2020.2.en.pdf | 150.5 K | 2020-04-21 | InnaMakarenko | Call for summer internship 2020 by Yannick Allard |
| calibrator.FCStd | 157.6 K | 2020-10-16 | YannickAllard | Calibration corners for touchscreen |
| calibrator.stl | 16.9 K | 2020-10-16 | YannickAllard | Calibration corners for touchscreen |
Topic revision: r25 - 2021-06-10 - AlexanderMorton