Pallet quality control

Why this solution?

  • Immediate decisions: an OK/WARN/NOK verdict enables automatic sorting and can stop the line the moment a NOK occurs.
  • More precise detection: class‑specific minimum/maximum dimensions and width‑to‑length ratios; these rules reduce false‑positive defects.
  • SCADA‑ready: REST JSON API (GET /scada) + Ignition SCADA support; fast responses and per‑IP rate limiting.
  • Audit & traceability: detection visualization with dimensions and scores, asterisks for too small/large, status and metadata in EXIF. Log files.
  • Flexibility: class‑specific warn/nok thresholds, min/max width/height, and SCADA size threshold (adjust detection based on measurements).
  • Industrial‑grade: IP ACL, CORS, logging, auto‑scan from folders, caching, and CPU‑based ONNX Runtime (no GPU required).

How does it work?

  • Imaging – A/B cameras take RGB JPEG images (other formats supported). Additional cameras can be added.
  • Preprocessing – smart cropping and scaling; the image is converted to RGB for the model and processed according to engineering recommendations.
  • Inference – ONNX detection on CPU (GPU also supported). Example classes: crack, corner, debris. Classes per customer request.
  • Post‑processing & aggregation – we clean the detection results and apply customer‑specific exclusions.
  • Rules:
    • Class‑specific warn/nok confidence thresholds (e.g., crack: warn 0.23, nok 0.30).
    • Min/Max width/height (in pixels), scalable by measured dimensions.
    • SCADA size threshold: longest side ≥ X% of the smallest dimension (e.g., crack 40%).
    • Width‑to‑height ratio: at least 1:2 or 1:3 – catches elongated cracks rather than point noise.
  • Output – REST JSON to SCADA + overlay JPEG with EXIF; status to DB (HTTP or MySQL).
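The rule step above can be condensed into a single decision function. The class names and threshold values below come from the examples in this document, but the data layout and function names are illustrative assumptions, not the actual implementation:

```python
# Illustrative sketch of the rule step; RULES values mirror the examples
# given in the text (crack warn 0.23 / nok 0.30, size and aspect filters).
RULES = {
    "crack":  {"warn": 0.23, "nok": 0.30, "min_px": 20, "aspect": 2.0},
    "corner": {"warn": 0.28, "nok": 0.31, "min_px": 70, "aspect": None},
    "debris": {"warn": 0.28, "nok": 0.31, "min_px": 70, "aspect": None},
}

def classify(det):
    """Map one detection {cls, conf, w, h} to OK / WARN / NOK."""
    rule = RULES[det["cls"]]
    w, h = det["w"], det["h"]
    # Size filter: ignore detections smaller than the class minimum.
    if max(w, h) < rule["min_px"]:
        return "OK"
    # Aspect-ratio filter: elongated defects only (e.g. cracks).
    if rule["aspect"] and max(w, h) / max(min(w, h), 1) < rule["aspect"]:
        return "OK"
    if det["conf"] >= rule["nok"]:
        return "NOK"
    if det["conf"] >= rule["warn"]:
        return "WARN"
    return "OK"
```

A per-image verdict would then be the worst verdict over all detections.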

Integration (with the production line)

  • Ignition SCADA / PLC – use a simple GET /scada?fullpath=…&x=…&y=…&z=…
  • Database / MES – status is saved to an HTTP endpoint or MySQL (upsert).
  • File‑based workflow – overlays and EXIF are immediately available for audit and retrospective analysis.
  • Custom integration – we implement integration according to the customer’s specification.
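For the Ignition SCADA / PLC path, a client only needs to build a GET request. The query parameter names (fullpath, x, y, z) come from the endpoint shown above; the host name and response fields are assumptions for illustration:

```python
from urllib.parse import urlencode

def scada_url(base_url, fullpath, x, y, z):
    """Build the GET /scada query; parameter names follow the endpoint above."""
    return f"{base_url}/scada?" + urlencode(
        {"fullpath": fullpath, "x": x, "y": y, "z": z}
    )

# A caller would then fetch the JSON decision, e.g. (hypothetical host):
# import json, urllib.request
# with urllib.request.urlopen(scada_url("http://vision-host", "/imgs/a.jpg",
#                                       1200, 800, 144)) as resp:
#     decision = json.load(resp)   # e.g. {"status": "OK", ...} (assumed shape)
```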

Technical specifications

  • Model format: ONNX; inference via ONNX Runtime on CPU (GPU also supported).
  • Input: RGB JPEG, BGR→RGB via OpenCV.
  • API responses: JSON; images: JPEG (EXIF + overlay).
  • Detection classes: crack, corner, debris (further classes can be added).
  • Thresholds: CONF 0.23 (global min), class‑specific WARN/NOK; min/max W/H; crack aspect ratio min 1:2 (optionally 1:3). Configurable.
  • Security: IP ACL, rate limiting, CORS.
  • Observability: /live metadata endpoint, heartbeat entries in logs.
  • Environment: Linux or Windows (client software runs on any device, incl. mobile).
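The input spec (RGB JPEG, BGR→RGB via OpenCV, CPU inference through ONNX Runtime) can be sketched as below. The model path, input size, and normalization are assumptions; only the BGR→RGB channel swap and the NCHW layout are taken from the specs above:

```python
import numpy as np

def preprocess(bgr, size=None):
    """BGR uint8 image -> NCHW float32 RGB tensor in [0, 1] (sketch).

    OpenCV loads JPEGs as BGR, so the channel order is reversed here;
    smart cropping/resizing would happen before this step.
    """
    rgb = bgr[:, :, ::-1]                       # BGR -> RGB
    x = rgb.astype(np.float32) / 255.0          # assumed normalization
    return np.transpose(x, (2, 0, 1))[None]     # HWC -> NCHW batch of 1

# CPU inference with ONNX Runtime (requires a model file, shown for shape only):
# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
# outputs = sess.run(None, {sess.get_inputs()[0].name: preprocess(img)})
```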

Reduce scrap, standardize decisions, and improve throughput.

Visioline’s machine vision software detects product defects in real time and sends OK / WARN / NOK decisions directly to SCADA when needed. The system integrates with the production line via a REST API, supports on‑image annotations and EXIF metadata, and gives class‑level control over thresholds to avoid noise and reliably find real defects.

Pallet inspection

Web‑based view: detections, search, and auto‑refresh

The operator sees all cameras and bases/trays in the browser—the original and the AI‑processed image are shown side by side in an A/B view. Detection labels and boxes are immediately visible on the image. Search is fast: filter by filename, date range, status (OK/WARN/NOK), or presence of annotations (D). The view orders the newest images first, so the latest events are always at the top.

Built‑in auto‑refresh monitors today’s folder and updates the gallery automatically—as soon as a new image or metadata appears, it is shown in the view without pressing “Refresh.” This keeps attention on the line and reduces clicking.
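The auto-refresh behaviour amounts to polling today’s folder for files not yet shown. The real implementation is not specified here; this is a minimal stdlib sketch of that idea, with hypothetical names:

```python
from pathlib import Path

def new_images(folder, seen):
    """Return image files in `folder` not yet in `seen` (polling sketch)."""
    found = []
    for p in sorted(Path(folder).glob("*.jpg")):
        if p.name not in seen:
            seen.add(p.name)     # remember so the next poll skips it
            found.append(p.name)
    return found
```

A browser client would call such a check on a timer and append any returned images to the gallery.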

Annotation: give the model better examples

If the AI needs additional training for more precise detection, the operator can annotate defects directly in the browser (rectangular boxes). Classes can be selected (e.g., crack, corner, debris, hole, heat, etc.), boxes can be dragged/resized, and saved to the database. Annotated images get a D indicator and can be viewed separately in statistics (and found via search).

This “human‑in‑the‑loop” workflow collects high‑quality examples used to train the next model version. If desired, we can add a confirmation step (double‑check), labeling guidelines, and export as ZIP/XML into the training pipeline—resulting in consistent labeling and continuously improving accuracy.

Extensions and integration: we tailor the software to your process

Our solution is modular and adaptable—we can do custom development to match your factory workflow.

Integrations: REST/JSON API, webhooks, CSV/Excel export, OPC‑UA/MQTT bridges if needed; connections to SCADA, MES, and ERP systems (e.g., work orders, NOK tickets, batch logs).

Users and permissions: role‑based access (operator/foreman/quality manager), SSO (AD/LDAP/Azure AD), audit logs.

Reports and analytics: period statistics, class trends, NOK reasons; connections to Power BI / Grafana views.

Rules and logic: customizable thresholds (OK/WARN/NOK), class‑specific size and shape constraints, automatic notifications.

Environment: on‑premises deployment, runs on the internal network (IP or DNS), multi‑camera support, offline‑friendly.

If you have special requests—custom UI, additional defect classes, an automatic decision tree, or a special data flow—we will build it.

Computer vision model and language model, machine learning

We trained a computer‑vision‑based quality control model (AI/ML) on real production data, focusing on defect detection in industrial conditions where lighting, background, and cameras can vary. This makes the solution more accurate than detection using a language model.

We used an SSD detector and manual labeling (rectangular boxes) to keep annotation simple and unambiguous for quality control (or the operator). The dataset was collected from multiple cameras and shifts (spanning several years); we balanced by classes and used augmentation (brightness/contrast, slight motion blur, scaling and rotations) so the model generalizes under changing conditions and recognizes defects it hasn’t seen. During training we kept validation and test sets separated by time/camera (preventing data leakage), optimized the mAP/Recall trade‑off, and used early stopping and error analysis.
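The brightness/contrast augmentation mentioned above can be illustrated with a few lines of NumPy. The jitter ranges here are invented for the example; the actual training pipeline and its parameters are not specified in this document:

```python
import numpy as np

def augment(img, rng):
    """Random brightness/contrast jitter on a float32 image in [0, 1].

    Illustrative only: ranges (±20% contrast, ±0.1 brightness) are assumptions.
    """
    contrast = rng.uniform(0.8, 1.2)
    brightness = rng.uniform(-0.1, 0.1)
    return np.clip(img * contrast + brightness, 0.0, 1.0)
```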

In production we apply class‑specific thresholds: global confidence 0.23 and NMS IoU 0.45, plus class warn/nok levels (e.g., crack 0.23/0.30, corner 0.28/0.31, debris 0.28/0.31) and size filters that exclude too small/unrealistic hits (e.g., crack ≥ 20×20 px; corner and debris ≥ 70×70 px; crack elongation requirement). This combination reduces false positives and leaves only significant objects on the image, which in turn makes reports and decisions trustworthy.
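The global confidence cut (0.23) and NMS IoU threshold (0.45) mentioned above work as in standard greedy non-maximum suppression; the values are from the text, the code itself is a generic textbook sketch, not the production implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / ((area(a) + area(b) - inter) or 1)

def nms(dets, iou_thr=0.45, conf_thr=0.23):
    """Greedy NMS over (box, conf) pairs: drop low-confidence detections,
    then keep each box only if it does not overlap an already-kept one."""
    dets = sorted((d for d in dets if d[1] >= conf_thr), key=lambda d: -d[1])
    keep = []
    for box, conf in dets:
        if all(iou(box, k[0]) < iou_thr for k in keep):
            keep.append((box, conf))
    return keep
```

Class-specific warn/nok levels and the size filters are then applied to the surviving boxes.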

A person supervises the system; browser‑based labeling continuously brings in new examples for the next training cycles, keeping the model self‑improving and quickly adapting to new patterns. All inference runs on‑premises / on the internal network with low latency, suitable for the production line; results are traceable and tunable (thresholds, classes, rules).

The result is a stable, fast, and explainable visual quality control system that reduces NOK risk, saves operator time, and increases production throughput—exactly what companies are looking for when they search for “quality control AI,” “computer vision in industry,” “real‑time defect detection.”

Frequently Asked Questions (FAQ)

Do we need a GPU?

No. The solution runs efficiently on CPU (and also supports GPU).

Can the rules be changed?

Yes, class‑specific thresholds (warn/nok, min/max W/H, aspect ratio, SCADA size fraction) are configurable.

How is a shape requirement such as “elongation” enforced?

For cracks, for example, an aspect‑ratio rule applies: min 1:2 (or 1:3 if desired). This filters out overly round/compact detections. Configurable.
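The aspect-ratio rule is a one-line check: the long side of the box must be at least twice (or three times) the short side. A minimal sketch:

```python
def is_elongated(w, h, min_ratio=2.0):
    """True if the box's long side is >= min_ratio x its short side.

    min_ratio=2.0 corresponds to the 1:2 rule, 3.0 to 1:3.
    """
    return max(w, h) >= min_ratio * max(min(w, h), 1)
```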

How does a measurement provided in a request (e.g., from SCADA) affect the decision?

SCADA sends X/Y/Z measurements; based on these we compute size_scale and scale thresholds accordingly. Configurable.
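One plausible form of this scaling is shown below. The exact size_scale formula is configurable and not given in this document, so the reference dimension and the min-of-axes choice are both assumptions made for illustration:

```python
# Hypothetical size_scale: assumes a reference pallet dimension in millimetres
# and takes the smallest measured axis, as the SCADA rule above suggests.
REFERENCE_MM = 1200.0

def size_scale(x_mm, y_mm, z_mm):
    return min(x_mm, y_mm, z_mm) / REFERENCE_MM

def scaled_min_px(base_min_px, x_mm, y_mm, z_mm):
    """Scale a pixel threshold by the measured part size."""
    return base_min_px * size_scale(x_mm, y_mm, z_mm)
```

With such a rule, a smaller measured part proportionally lowers the pixel thresholds a defect must exceed.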

Will an explanation be drawn onto the image as well?

Yes. Detections are drawn onto the image, showing class, confidence, and dimensions; EXIF contains status and metadata (for transparent auditing). Configurable.

Does detection also run in the browser?

Yes, depending on the solution.