Summary of API Functions
The following figure lists the API commands and shows the relationships between functional groups of commands.
Two tables below list the API functions by functional group and alphabetically. Designers should consult the referenced topics for detailed descriptions of the API functions.
Grouped By Function
Detected Cameras
- Get the number of FireWire, USB or IP cameras currently connected to the bus. Only IP cameras that have a valid IP address are detected.
- Get the number of cameras connected to the computer, including IP cameras that do not have an IP address.
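As a quick illustration of the enumeration step, the sketch below uses the conventional two-call pattern with PxLGetNumberCameras: a first call with a NULL buffer to learn the camera count, then a second call to fetch the serial numbers. It assumes the standard PixeLINKApi.h declarations (U32, PXL_RETURN_CODE, API_SUCCESS); verify the exact names and signatures against your SDK header.

```c
/* Illustrative sketch: enumerate connected cameras.
 * Assumes the conventional Pixelink 4.0 API declarations in PixeLINKApi.h. */
#include <stdio.h>
#include <stdlib.h>
#include "PixeLINKApi.h"

int main(void)
{
    U32 numCameras = 0;

    /* First call with a NULL buffer simply reports how many cameras are present. */
    PXL_RETURN_CODE rc = PxLGetNumberCameras(NULL, &numCameras);
    if (!API_SUCCESS(rc) || numCameras == 0) {
        printf("No cameras detected (rc = 0x%X)\n", (unsigned)rc);
        return 1;
    }

    /* Second call fills a buffer with one serial number per camera. */
    U32* serialNumbers = (U32*)malloc(numCameras * sizeof(U32));
    if (serialNumbers && API_SUCCESS(PxLGetNumberCameras(serialNumbers, &numCameras))) {
        for (U32 i = 0; i < numCameras; i++)
            printf("Camera %u: serial number %u\n", (unsigned)i, (unsigned)serialNumbers[i]);
    }

    free(serialNumbers);
    return 0;
}
```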
Initialize a Camera
- Initialize a camera and return the camera handle.
- Initialize a camera in its entirety and obtain the camera handle for subsequent API function calls.
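A minimal initialization sketch follows. It assumes the conventional PxLInitialize/PxLUninitialize signatures, and that passing a serial number of 0 opens the first available camera; confirm both against your SDK documentation.

```c
/* Sketch: open the first available camera, then release it.
 * Assumes PxLInitialize(serialNumber, &handle) and PxLUninitialize(handle). */
#include <stdio.h>
#include "PixeLINKApi.h"

int main(void)
{
    HANDLE hCamera = NULL;

    /* A serial number of 0 is assumed here to mean "first camera found". */
    PXL_RETURN_CODE rc = PxLInitialize(0, &hCamera);
    if (!API_SUCCESS(rc)) {
        printf("PxLInitialize failed (rc = 0x%X)\n", (unsigned)rc);
        return 1;
    }

    /* ... use the camera handle with other API functions ... */

    PxLUninitialize(hCamera);
    return 0;
}
```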
Camera Identification
- Get information about the specified camera.
- Returns version information about the PixeLINK hardware and firmware.
- Set the IP address of a specified IP camera.
- Set a name for the specified camera.
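The sketch below shows the typical identification calls. The CAMERA_INFO field names (ModelName, SerialNumber, FirmwareVersion) and the PxLSetCameraName argument are assumptions based on the conventional Pixelink 4.0 API and should be checked against PixeLINKApi.h.

```c
/* Sketch: read the CAMERA_INFO block and assign a camera name.
 * Field names and signatures are assumptions; verify against PixeLINKApi.h. */
#include <stdio.h>
#include "PixeLINKApi.h"

int main(void)
{
    HANDLE hCamera = NULL;
    if (!API_SUCCESS(PxLInitialize(0, &hCamera)))
        return 1;

    CAMERA_INFO info;
    if (API_SUCCESS(PxLGetCameraInfo(hCamera, &info))) {
        printf("Model:    %s\n", info.ModelName);
        printf("Serial:   %s\n", info.SerialNumber);
        printf("Firmware: %s\n", info.FirmwareVersion);
    }

    /* Give the camera a human-readable name, stored on the camera itself. */
    char name[] = "InspectionStation-1";
    PxLSetCameraName(hCamera, name);

    PxLUninitialize(hCamera);
    return 0;
}
```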
Camera Features
Enumerate Features
- Get the list of the features supported by the specified camera.
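Feature enumeration typically uses the same two-call, size-then-data pattern as camera enumeration. The sketch below assumes the conventional PxLGetCameraFeatures signature, the FEATURE_ALL identifier, and the CAMERA_FEATURES/CAMERA_FEATURE field names; treat all of these as assumptions to verify against the SDK header.

```c
/* Sketch: enumerate every feature the camera supports.
 * Struct field names and FEATURE_ALL are assumptions; check PixeLINKApi.h. */
#include <stdio.h>
#include <stdlib.h>
#include "PixeLINKApi.h"

int main(void)
{
    HANDLE hCamera = NULL;
    if (!API_SUCCESS(PxLInitialize(0, &hCamera)))
        return 1;

    /* First call with a NULL buffer reports the required buffer size. */
    U32 bufferSize = 0;
    PxLGetCameraFeatures(hCamera, FEATURE_ALL, NULL, &bufferSize);

    CAMERA_FEATURES* pFeatures = (CAMERA_FEATURES*)malloc(bufferSize);
    if (pFeatures &&
        API_SUCCESS(PxLGetCameraFeatures(hCamera, FEATURE_ALL, pFeatures, &bufferSize))) {
        printf("Camera reports %u features\n", (unsigned)pFeatures->uNumberOfFeatures);
        for (U32 i = 0; i < pFeatures->uNumberOfFeatures; i++)
            printf("  feature id %u, %u parameter(s)\n",
                   (unsigned)pFeatures->pFeatures[i].uFeatureId,
                   (unsigned)pFeatures->pFeatures[i].uNumberOfParameters);
    }

    free(pFeatures);
    PxLUninitialize(hCamera);
    return 0;
}
```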
Get/Set Features on Camera
- Get the current value of the specified feature.
- Set the value of the specified feature.
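Getting and setting a feature follows a single pattern: a feature identifier plus an array of float parameters. The sketch below reads and then doubles the exposure time, assuming the conventional PxLGetFeature/PxLSetFeature signatures and the FEATURE_SHUTTER and FEATURE_FLAG_MANUAL identifiers; verify these against PixeLINKApi.h.

```c
/* Sketch: read and adjust a single feature value (exposure/shutter here). */
#include <stdio.h>
#include "PixeLINKApi.h"

int main(void)
{
    HANDLE hCamera = NULL;
    if (!API_SUCCESS(PxLInitialize(0, &hCamera)))
        return 1;

    U32   flags = 0;
    U32   numParams = 1;          /* FEATURE_SHUTTER takes a single parameter */
    float exposure = 0.0f;

    if (API_SUCCESS(PxLGetFeature(hCamera, FEATURE_SHUTTER, &flags, &numParams, &exposure)))
        printf("Current exposure: %f s\n", exposure);

    /* Double the exposure and write it back under manual control. */
    exposure *= 2.0f;
    if (!API_SUCCESS(PxLSetFeature(hCamera, FEATURE_SHUTTER, FEATURE_FLAG_MANUAL, 1, &exposure)))
        printf("Could not set the new exposure\n");

    PxLUninitialize(hCamera);
    return 0;
}
```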
Grouped Alphabetically
The API functions are summarized below, in alphabetical order.
- Initialize a specific controller and assign it to a specific camera.
- Create a descriptor for the specified camera.
- Convert a PixeLINK data stream (.pds) file to a video file (.avi).
- Convert either an uncompressed PixeLINK data stream (.pds) file or a compressed clip to a video file.
- Convert a raw frame residing in an image buffer to an image file (.bmp, .tif, .psd, .jpg).
- Get the list of the features supported by the specified camera.
- Returns version information about the PixeLINK hardware and firmware.
- Get information about the specified camera.
- Returns version information about the PixeLINK hardware and firmware.
- Get a video clip and save it as a PixeLINK data stream (.pds) intermediate file.
- Returns the current value of the camera's 'clock' that it uses to timestamp images.
- Saves an encoded (compressed) video clip to a file.
- Returns details about the last error that occurred.
- Get the current value of the specified feature.
- Get the next image frame from the camera and put it in an image buffer (see the capture example following this list).
- Get the number of cameras currently connected to the bus.
- Initialize a camera and return the camera handle.
- Initialize a camera in its entirety and obtain the camera handle for subsequent API function calls.
- Load settings from non-volatile memory on the camera.
- Remove the descriptor from the specified camera.
- Reset the size of the preview window to the size of the streaming video (thereby optimising display performance).
- Save the current settings to non-volatile memory on the camera.
- Specify a callback function to modify the video data in the preview window or as it is translated into an end-user format.
- Set a name for the specified camera.
- Set the value of the specified feature.
- Set the preview window settings for the specified camera.
- Set the current state of the preview window to be played, stopped or paused.
- Similar to PxLSetPreviewState, but accommodates a callback function that is called when certain Windows-based operations are performed on the preview window.
- Set the current state of the video stream to stopped, started or paused.
- Releases a particular controller, allowing it to be assigned to another camera.
- Uninitialize the specified camera.
- Set the update mode for the specified descriptor.
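Several of the functions above (PxLSetStreamState and PxLGetNextFrame in particular) are normally used together to capture an image: start the stream, grab a frame into a user-supplied buffer, then stop the stream; the raw frame could then be converted with PxLFormatImage. The sketch below is illustrative only; the buffer size, the START_STREAM/STOP_STREAM constants and the FRAME_DESC usage follow the conventional Pixelink 4.0 patterns and should be verified against the SDK, and a real application should size the buffer from the current ROI, pixel addressing and pixel format.

```c
/* Sketch: start the stream, grab one raw frame, then stop the stream. */
#include <stdio.h>
#include <stdlib.h>
#include "PixeLINKApi.h"

#define FRAME_BUFFER_SIZE (5000 * 5000 * 2)  /* generous; size from ROI/pixel format in real code */

int main(void)
{
    HANDLE hCamera = NULL;
    if (!API_SUCCESS(PxLInitialize(0, &hCamera)))
        return 1;

    void* pFrame = malloc(FRAME_BUFFER_SIZE);
    FRAME_DESC frameDesc;

    if (pFrame && API_SUCCESS(PxLSetStreamState(hCamera, START_STREAM))) {
        frameDesc.uSize = sizeof(frameDesc);  /* tell the API which FRAME_DESC version is in use */
        PXL_RETURN_CODE rc = PxLGetNextFrame(hCamera, FRAME_BUFFER_SIZE, pFrame, &frameDesc);
        if (API_SUCCESS(rc))
            printf("Captured one frame\n");
        else
            printf("PxLGetNextFrame failed (rc = 0x%X)\n", (unsigned)rc);

        PxLSetStreamState(hCamera, STOP_STREAM);
    }

    free(pFrame);
    PxLUninitialize(hCamera);
    return 0;
}
```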
Summary of API Features
The following figure lists the API Features supported by our cameras.
Designers should consult the referenced topics for detailed descriptions of the API Features.
Grouped Alphabetically
The API features supported by the cameras are summarized below, in alphabetical order.
- This read-only feature allows the user to query the frame rate the camera will use while streaming image data.
- When enabled, this feature places an upper bound on the amount of aggregate bandwidth the camera may use for image data.
- Brightness controls the black level in the image by applying an offset voltage to the pixels before the analog-to-digital conversion.
- Extended shutter allows for a multiple-slope integration to extend the dynamic range of the camera.
- The frame interval and the required bandwidth on the communication bus are fixed by the Frame Rate value. The available frame rate range depends on the bus technology, the current video format, shutter speed, ROI and/or the video mode.
- Flip controls the orientation of the image. The image can be flipped horizontally or vertically.
- Controls the amount of focus being applied to the Varioptic liquid lens on the camera. You can perform a "One time Auto Focus" or set the focus manually within the range of the specific lens. You may edit the upper and lower limits of this range to help speed up focusing.
- The gain controls the amplification of the image for the camera.
- Gamma controls the contrast in the image and is typically used in microscopy to improve the perceived dynamic range.
- General Purpose Output (GPO) signals are controlled by this feature. A total of 4 GPO signals can be controlled on all PL-B board-level cameras and 2 GPO signals can be controlled on all USB3 cameras.
- The Lookup Table (LUT) typically has a number of 2-byte entries that range in value from 0 to 1023 (10-bit depth) or 0 to 4095 (12-bit depth). The LUT is used to implement the Gamma feature but it can also be used to implement any LUT transfer function required.
- This feature allows an application to determine the maximum number of bits the camera can use to represent a pre-formatted (or raw) pixel value.
- This feature allows an application to control the maximum packet size that the software can receive.
- The memory channel feature stores all camera parameters in non-volatile memory. It is similar to the camera configuration files used by frame grabbers and software packages such as LabVIEW, but here the settings reside on the camera, not the host PC.
- The pixel addressing feature reduces the number of pixels that are read from the ROI. Pixel Addressing is controlled by two parameters: a Pixel Addressing mode and a value.
- The Pixel Format refers to the output formatted pixel. In cases where the camera's raw pixel size is larger than the output, the data is truncated and the least significant bits are lost. In cases where the camera's raw pixel size is smaller than the output, the least significant bits in the output data are padded with zeros.
- Region of Interest (ROI) is a feature of most CMOS sensors that allows only a portion of the active sensor to be selected and read out. The benefit of this is a reduction in the total number of pixels and an increase in the readout speed. Often referred to as windowing, the ROI is defined by a top and left pixel as well as a width and height (see the configuration example following this list).
- Rotate controls the rotation of the image. The image can be rotated by 90, 180 or 270 degrees in the clockwise direction.
- Saturation controls the intensity of the hues in the image. The saturation control allows the hue to be changed from full mono to more than twice the normal level. If saturation is set to 100, it has no effect. With saturation set to zero, the color camera behaves as a monochrome camera.
- The sharpness feature applies a standard convolution filter to the intensity (luma) channel, based on the Laplacian of the pixel luma at each location (i, j). The Laplacian is implemented as a 3x3 convolution kernel filter.
- A set of control information the camera uses to calculate the SharpnessScore of an image. The SharpnessScore of the image is returned in the SharpnessScore field of the FRAME_DESC structure that is returned with each image capture.
- The shutter feature controls the exposure time of the sensor. Increasing the shutter integration time makes the image brighter. On CMOS sensors, increasing the shutter integration time will also increase the amount of noise in the image.
- This feature can be used to put the camera into (and out of) a 'special' mode of operation.
- The Temperature feature is a read-only feature that provides an indication of the temperature of the sensor chip. This is important because sensor performance is related to the temperature of the sensor. Typically, read noise can double for every 10-degree increase in temperature. In warm environments, the temperature sensor can be used to assess the effectiveness of the mounting hardware at removing heat from the camera.
- Trigger with Controlled Lighting allows control over the manner in which rolling shutter or fast-reset shutter cameras function when using a trigger. In normal operation, a rolling shutter camera is resetting, exposing and reading out information concurrently. This results in consistent exposure times and faster frame rates. However, since the rolling shutter is only exposing a portion of the sensor at a time, this mode is not effective at stop-motion imaging.
- Trigger controls the response of the camera to an external trigger input. Trigger functionality is required for industrial and machine vision applications where the timing of the image capture is determined by external events. The trigger can operate in a number of modes which provide flexibility when interfacing the camera with external equipment.
- White balance defines the color temperature of the light source. Calibrations are performed for 3200 K (incandescent), 5000 K (daylight 1) and 6500 K (daylight 2). The camera uses this information to select from one of a number of possible color correction matrices. Turning the White Balance off disables the color correction.
- The White shading feature provides control over the individual red, green and blue channel gains so that a non-standard color balance can be achieved. One-push Auto will attempt to white balance the gains (match the histogram peaks of the brightest area in each color channel) based on the image data in the current ROI.
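To tie several of the features above together, the sketch below sets a Region of Interest and then requests a one-push auto white balance. The FEATURE_ROI parameter order (left, top, width, height), the three FEATURE_WHITE_SHADING gains and the FEATURE_FLAG_ONEPUSH flag are assumptions based on the conventional Pixelink 4.0 API; verify them against PixeLINKApi.h, and note that most cameras only accept an ROI change while the stream is stopped.

```c
/* Sketch: configure an ROI, then run a one-push auto white balance. */
#include <stdio.h>
#include "PixeLINKApi.h"

int main(void)
{
    HANDLE hCamera = NULL;
    if (!API_SUCCESS(PxLInitialize(0, &hCamera)))
        return 1;

    /* The ROI can normally only be changed while the stream is stopped
     * (the stream is stopped by default right after initialization). */
    float roi[4] = { 0.0f, 0.0f, 640.0f, 480.0f };  /* left, top, width, height in pixels */
    if (!API_SUCCESS(PxLSetFeature(hCamera, FEATURE_ROI, FEATURE_FLAG_MANUAL, 4, roi)))
        printf("Could not set the ROI\n");

    /* One-push auto white balance needs image data, so start the stream first. */
    if (API_SUCCESS(PxLSetStreamState(hCamera, START_STREAM))) {
        float gains[3] = { 0.0f, 0.0f, 0.0f };  /* red, green, blue channel gains */
        if (!API_SUCCESS(PxLSetFeature(hCamera, FEATURE_WHITE_SHADING, FEATURE_FLAG_ONEPUSH, 3, gains)))
            printf("One-push white balance was not accepted\n");
        PxLSetStreamState(hCamera, STOP_STREAM);
    }

    PxLUninitialize(hCamera);
    return 0;
}
```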