Experiment Controller

Runs, controls, and displays Ceed experiments on screen, both during a “real” experiment and when previewing it.

The purpose of Ceed is to run visual-temporal experiments. Once a stage containing one or more functions and one or more shapes has been designed, you’re ready to run the experiment.

Following are some of the experiment configuration options:

Camera-projector-array alignment

There are three systems interacting with the tissue, and they all need to be aligned to each other: the projector, the camera, and the MEA electrode grid.

Camera-projector

The first step is to draw any unique shape in Ceed, project this pattern onto the MEA plane, and then capture the projected pattern using the camera. Then, in the GUI, scale and align the captured image to the original shape. This gives us the camera to projector ViewControllerBase.cam_transform matrix.

With the camera aligned to the projector output, you can draw shapes that target specific regions of the slice visually (from a broad-field stimulation camera capture) and they will be projected at the correct place on the tissue. If there’s mirroring, making affine alignment impossible, you can flip either the camera (ViewControllerBase.flip_camera) or the projector (ViewControllerBase.flip_projector) horizontally. These settings are also exposed in the GUI.
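The resulting alignment is an affine transform applied in homogeneous coordinates. As a rough illustration (the matrix values and map_camera_point below are made up for this sketch, not Ceed’s API), a 4x4 matrix like cam_transform maps a camera-space point into projector space:

```python
# Made-up alignment matrix: a slight scale plus a small translation.
cam_transform = [
    [1.02, 0.00, 0.0, 5.0],
    [0.00, 1.02, 0.0, -3.0],
    [0.00, 0.00, 1.0, 0.0],
    [0.00, 0.00, 0.0, 1.0],
]

def map_camera_point(x, y):
    """Apply the 4x4 transform to a 2D point in homogeneous coordinates."""
    point = [x, y, 0.0, 1.0]
    px, py = (sum(m * p for m, p in zip(row, point))
              for row in cam_transform[:2])
    return px, py
```

Flipping the camera or projector corresponds to negating the appropriate diagonal entry, which is why a mirrored setup cannot be absorbed into the affine fit alone.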

Camera-array

With the camera aligned to the projector, we just need to align the MEA grid to the camera. First take a camera picture of the grid; you should be able to see the 2D grid of electrode termination points. Then, in the GUI, display the virtual grid defined by ViewControllerBase.mea_num_rows, ViewControllerBase.mea_num_cols, ViewControllerBase.mea_pitch, ViewControllerBase.mea_diameter, and ViewControllerBase.mirror_mea, and manually align it to the image. This generates the ViewControllerBase.mea_transform.

With the array aligned to the image and the projector aligned to the image, we now know exactly which electrodes the drawn shapes will cover, and we can relate the activity of those cells to the stimulation.
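For intuition, before the mea_transform is applied the virtual grid is just a regular lattice of electrode centers; a minimal sketch (mea_grid is a hypothetical helper, not a Ceed function):

```python
def mea_grid(num_rows, num_cols, pitch):
    """Electrode centers of an untransformed MEA grid, listed row by row.

    pitch is the center-to-center distance in pixels, as in
    ViewControllerBase.mea_pitch (assumed equal for rows and columns).
    """
    return [(col * pitch, row * pitch)
            for row in range(num_rows) for col in range(num_cols)]
```

The GUI then lets the user translate, rotate, and scale this lattice over the camera image, and that manual fit is what produces the mea_transform.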

Video mode

The projector supports 120 (119.96 more accurately) frames per second at its full resolution of 1920x1080 pixels, but it also offers higher speed modes.

It can split the GPU image into 4 quadrants, such that it renders four 960x540 pixel images for each overall frame. So at the cost of half the x and y resolution, we can play at 480Hz.

Normally each image has red, green, and blue channels. By instead outputting a grayscale image, we can use each of the three channels to further multiply our speed by three to render 960x540 grayscale at 1,440 Hz.
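The effective rates for the three modes are simple arithmetic on the base refresh rate (using the typical measured 119.96 Hz mentioned above):

```python
# Effective shape-update rates per video mode (names as in video_modes).
base = 119.96            # 'RGB': 1920x1080, full color
quad4x = base * 4        # 'QUAD4X': 960x540 color, ~480 Hz
quad12x = base * 12      # 'QUAD12X': 960x540 grayscale, ~1440 Hz
```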

The video mode is configurable with ViewControllerBase.video_mode and from the GUI. Ceed will automatically correctly render the images for each mode when it is selected.

LED mode

The projector has three LEDs; red, green, and blue. In Ceed you can draw shapes and select their color(s). Internally, the projector uses its LEDs to display the image with the requested colors, like a normal projector.

However, you can manually turn OFF each of these LEDs, in which case that color will not be displayed. ViewControllerBase.LED_mode_idle and ViewControllerBase.LED_mode configure which LEDs are active outside and during an experiment, respectively.

Typically you’d select 'none' for ViewControllerBase.LED_mode_idle so that the projector is OFF outside the experiment. This way you don’t stimulate the tissue between experiments. During the experiment you can either rely on the color selected for each stage, turn OFF specific LEDs, or use optical filters to filter out unwanted color channels.

Frame rate and dropped frames

Frame time

The projector and GPU display frames at a specific ViewControllerBase.frame_rate. In the Ceed GUI you must enter the exact GPU frame rate; otherwise Ceed will project the stages at an incorrect rate. The frame rate is internally converted to a fraction (ViewControllerBase.frame_rate_numerator, ViewControllerBase.frame_rate_denominator) that is used to clock the functions (see below).
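The fraction conversion can be illustrated with Python’s fractions module (a sketch of the idea, not Ceed’s exact code):

```python
from fractions import Fraction

# A measured refresh rate such as 119.96 Hz is stored exactly as a
# numerator/denominator pair, so frame times never accumulate float error.
rate = Fraction("119.96")
numerator, denominator = rate.numerator, rate.denominator  # 2999, 25
period = 1 / rate  # exact frame period in seconds, Fraction(25, 2999)
```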

Normally, the GPU limits us to the frame rate, so we don’t have to estimate in software when to display the next frame; we simply display the next frame as soon as the GPU returns control to the CPU. However, if that’s not available, ViewControllerBase.use_software_frame_rate can be used to force the frame rate, although it’s very unreliable and should not be used during an experiment.

Long frames

In an ideal system, every frame is displayed for exactly the duration of the period of the frame_rate before displaying the next frame. In this system the time of each frame is 0, 1 * period, 2 * period, …, n * period. Since the period is an exact fraction, the current time can be expressed as an exact fraction, and when passed to a stage’s function it can accurately determine when each function is done.

In a real system, some frames may be displayed for more than one frame duration. This could happen if the CPU is too slow then the current frame is e.g. displayed for 2 or more frames before the next frame is shown. If this is not accounted for, all subsequent frames are temporally displaced by the number of long frames.

For example, say the frame rate is exactly 1 Hz, so the period is 1 second. Normally, we’d display frames at 0s, 1s, …, ns, and use that time when computing the functions for each frame (i.e. multiplying the frame count by the period to get the time). So we display frame 0 at 0s, frame 1 at 1s, and frame 2 at 2s. But frame 2 goes long and is displayed for 2 seconds. Because we’re counting frames, the next frame time will be computed as frame 3 at 3s. However, in real time, because frame 2 was two frames long, frame 3 is actually displayed at 4s. So all subsequent frames are delayed.

Dropping frames

To fix this, frame 3 should be dropped and we should instead display frame 4 next. Or more generally, we need to drop frames until the frame number times the period catches up with the real time.
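The catch-up rule can be written down directly (frames_to_drop is a hypothetical helper, not part of Ceed):

```python
import math

def frames_to_drop(elapsed, count, period):
    """How many frames to skip so that count * period catches up with
    the real elapsed time since the first frame."""
    should_have_shown = math.floor(elapsed / period)  # ideal frame count
    return max(0, should_have_shown - count)          # deficit to skip
```

In the example above, after frame 2 runs two frames long we have shown 3 frames (0, 1, and 2) at 4 seconds of real time, so frames_to_drop(4, 3, 1) is 1: frame 3 is dropped and frame 4 is shown next.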

Ceed has two approaches to detecting when to drop frames; a software approach and a hardware approach. The software approach uses the time measured after rendering each frame and a running-median filter (FrameEstimation). With default settings it may take a few frames before we correct the delay.

We also have a hardware solution using a Teensy device for TeensyFrameEstimation. This device watches for dropped frames and notifies us over USB when it happens. This lets us respond more quickly.

The hardware device can be turned OFF with TeensyFrameEstimation.use_teensy, which is also configurable in the GUI. If disabled, we fall back to the software approach, unless frame dropping is completely disabled with ViewControllerBase.skip_estimated_missed_frames.

Note

For any dropped frames, we still pass the time to the stage and it generates the shape values for these frames, but the frames are just dropped.

This ensures that the root stage receives contiguous times at the given frame rate without any jumps; we just don’t draw the frames that need to be skipped.

Experiment flow

Following is the overall experiment flow and what Ceed does when running an experiment.

Experiment mode

There are two modes under which an experiment runs: preview mode or a “real” experiment. In preview mode, the stage is run directly in the Ceed GUI drawing area but is not projected by the projector (unless LED_mode_idle is not 'none'). All the code is executed within the main Ceed process and the controller used to control the experiment is a ControllerSideViewControllerBase instance.

In a “real” experiment, the user launches a second full-screen window from within Ceed. This starts a new process that communicates with Ceed over a queue and runs the experiment in that fullscreen window. In this case, the controller in the second process is a ViewSideViewControllerBase instance, but the main GUI still has its ControllerSideViewControllerBase instance through which it communicates with the ViewSideViewControllerBase instance. Data is constantly sent between the two processes: the second process is initialized with the config at the start, and once playing starts, the client (second process) continuously sends data back to the main Ceed GUI for processing and storage.

Experiment control

When running from the full-screen window, you can control the experiment using keyboard shortcuts. Specifically, the following shortcuts:

  • ctrl-s starts an experiment using the currently selected stage (selected in the Ceed GUI).

  • ctrl-c stops the currently running experiment.

  • ctrl-z stops the currently running experiment (if any) and closes the fullscreen window.

  • ctrl-f toggles the second window between fullscreen and normal. This should not really be used.

If previewing, you can start or stop the stage using the play button in the GUI.

Preparing stage

When the user starts the experiment, Ceed copies the stage selected by the user into a new stage named last_experiment_stage_name. If a stage with that name already exists, it is replaced. This is the stage that will be run, and the name of the stage you should look up during analysis.

Given the stage, it samples all the randomized function parameters, it expands all the reference stages and functions, and it re-samples the function parameters not marked as lock_after_forked. See copy_and_resample().

Preparing hardware

Next, it prepares a new section in the data file for this experiment (see prepare_experiment()). It then sets the video mode (video_mode) and LED state (LED_mode) to the requested state and is ready to run the stage.

If it’s running for “real” and not being previewed, Ceed tells the second process to start the stage in the second full-screen window. It also starts communication with the TeensyFrameEstimation if it’s being used. Now, it sets up all the graphics and everything it needs to run.

Warmup

When starting the stage, the stage will pre-compute the intensity values for all its frames, if enabled (Pre-computing). For a long experiment this may take some time, during which the GUI won’t update. If the Teensy is running, its LED will blink faster than normal until the pre-computing and warmup are done; once the stage actually starts playing frames, it will blink even faster until the stage is done.

When pre-computing is done, Ceed will run 50 blank frames. This gives sufficient time to make sure LED_mode is updated. It also allows us to collect the rendering times of these warmup frames, which are then used to initialize FrameEstimation to help us estimate when to drop frames, as a fallback if the Teensy is not available.

Running stage

To run the stage, Ceed samples the stage’s function at integer multiples of the period of the GPU refresh rate (frame_rate, or rather effective_frame_rate if any of the quad modes is ON). Specifically, Ceed counts GPU frames, incrementing the counter by one for each frame (or sub-frame if a quad mode is ON). So to compute the current frame time, it divides count by effective_frame_rate.

Hence frame_rate must be exactly the GPU refresh rate. E.g. if the GPU is updating at 119.96 (which is typical), the frame rate must be set to 119.96, and not 120.
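A sketch of the exact frame-time computation (not Ceed’s actual code): count sub-frames and divide by the effective rate, keeping everything as exact fractions so no floating-point drift accumulates.

```python
from fractions import Fraction

frame_rate = Fraction("119.96")              # exact GPU refresh rate
n_sub_frames = 4                             # e.g. 'QUAD4X' video_mode
effective_frame_rate = frame_rate * n_sub_frames
count = 480                                  # sub-frames rendered so far
frame_time = count / effective_frame_rate    # exact, just over 1 second
```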

At each frame or sub-frame, Ceed ticks the stage with the current frame time. This causes all the shapes to be updated to the intensity of the stage’s function and the new frame to be rendered.

As the experiment is running, Ceed also logs all the shape data. For each frame it stores the intensity of all shapes, as well as some minimal debug data about the frame time. More debug data can be logged by turning ON ViewControllerBase.log_debug_timing. It also logs the corner-pixel Ceed-MCS alignment pattern for each frame, to be used for later alignment (see Synchronization).

Experiment shutdown

When the experiment finishes or is stopped, Ceed will save the last camera image from just before the experiment stopped (last_cam_image), if the camera was playing. Then it stops the stage (if it didn’t stop naturally) and cleans up.

Synchronization

To facilitate temporal data alignment between the Ceed data (each projected frame) and MCS (the electrode data), Ceed outputs a bit pattern in the top-left corner pixel for each frame. This pixel is output by the projector controller as a bit pattern on its digital port, and is recorded by MCS. It is turned ON just before the experiment starts when running a “real” experiment using ControllerSideViewControllerBase.set_pixel_mode().

Specifically, the corner pixel contains 24 bits (8 each for red, green, and blue). Ceed sends synchronization data in this pixel, so that after an experiment Ceed can compare its frames with the MCS data that logged the pattern and figure out exactly the electrode samples that correspond to each projected frame.
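As an illustration of the layout (corner_pixel_rgb is a hypothetical helper, not Ceed’s API), the 24-bit pattern is simply split across the pixel’s three 8-bit channels:

```python
def corner_pixel_rgb(bits24):
    """Split a 24-bit sync pattern into 8-bit red, green, blue values."""
    return (bits24 >> 16) & 0xFF, (bits24 >> 8) & 0xFF, bits24 & 0xFF
```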

See DataSerializerBase for the details of this pattern. See also Temporal alignment protocol for how it is used to merge the data after an experiment.

class ceed.view.controller.FrameEstimation

Bases: object

A window-based running-median estimator.

Starting from the first frame, you pass it (add_frame()) the time just after each frame is rendered. With that, it estimates the time the first frame was rendered by estimating the whole number of frames passed since first_frame_time, rounding, and then back-computing the first frame time from the current frame time, the count, and the GPU period.

So, each frame gives us an estimate of when the first frame was rendered. Next, we keep a history of this estimate from the last 100 frames and its median is the best estimate for the actual first frame render time.

Next, given the best estimate of the first frame render time and the period, we compute how many frames have passed and round to whole frames. We record this number for the last n (skip_detection_smoothing_n_frames) frames in the circular render_times buffer. Our assumption is that starting from the first of the n frames until the nth frame, we should have rendered n - 1 frames.

Averaging this over the n frames, so that we are less sensitive to individual frame jitter, we get the best estimate of how many frames we should have rendered by now, given the start time and the period. Additionally, we keep a global count of the total number of frames actually submitted to the GPU and rendered. If our estimate of the number of frames we should have rendered is larger than the number actually rendered, we know that some frame took too long to render and we need to drop one or more frames to compensate.

add_frame() returns how many frames need to be dropped to catch up.
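The running-median idea can be sketched as follows (a simplified illustration using a plain list where the class uses min/max heaps; update_first_frame_estimate is a made-up name):

```python
import statistics

def update_first_frame_estimate(render_time, first_frame_time, period,
                                history, max_history=100):
    """Each frame's render time yields one estimate of when frame 0 was
    rendered; the median over recent estimates is the best overall one."""
    # whole number of periods since the current best first-frame estimate
    n = round((render_time - first_frame_time) / period)
    # back-compute frame 0's render time from this frame's render time
    history.append(render_time - n * period)
    del history[:-max_history]          # keep only the last estimates
    return statistics.median(history)   # best estimate of frame 0's time
```

Because the median ignores outliers, a single long frame perturbs the first-frame estimate far less than an average would.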

Before the first real frame, we do some frame warmup and initialize render_times with reset().

min_heap: List = []

The min heap used to track the median.

max_heap: List = []

The max heap used to track the median.

history: List = []

A circular buffer of 100 items that tracks the estimate of the time that the first frame was rendered, using the last 100 frames.

count is the index in history of the oldest timestamp (i.e. the next one to be overwritten).

count: int = 0

Index in history.

frame_rate: float = 0

The GPU frame rate.

render_times: List[float] = []

A circular buffer of skip_detection_smoothing_n_frames items that tracks the estimate of how many frames should have been rendered, using the last skip_detection_smoothing_n_frames frames.

last_render_times_i is the index in render_times of the oldest estimate (i.e. the next one to be overwritten).

skip_detection_smoothing_n_frames: int = 4

How many frames to average to detect when a frame needs to be skipped.

See class description.

smoothing_frame_growth: float = 0.0

When averaging render_times, we subtract smoothing_frame_growth, which is the average over range(n), i.e. the expected number of frames to be added over the last skip_detection_smoothing_n_frames frames.

If the remainder is not zero, it is the number of frames to be dropped.
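Concretely, with the default n of 4:

```python
n = 4  # skip_detection_smoothing_n_frames
smoothing_frame_growth = sum(range(n)) / n  # (0 + 1 + 2 + 3) / 4 == 1.5
```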

first_frame_time: float = 0.0

The best current estimate of the time that the first experiment frame was rendered.

reset(frame_rate: float, render_times: List[float]) None

Resets the instance and initializes it to the render times from the warm up frames.

update_first_render_time(render_time: float) None

Adds the frame render time to the running-median history and updates first_frame_time with the new best estimate.

add_frame(render_time: float, count: int, n_sub_frames: int) int

Estimates number of missed frames during experiment, given the render time of the last frame and the total frames sent to the GPU.

n_sub_frames is the number of sub-frames included in count, e.g. in quad mode.

Can only be called after it is initialized with warmup frames in reset().

class ceed.view.controller.TeensyFrameEstimation

Bases: kivy._event.EventDispatcher

As an alternative to FrameEstimation, we can estimate when the GPU rendered a frame for too long and a frame needs to be dropped, using the attached Teensy microcontroller.

This microcontroller watches the clock bit of the 24-bit corner pixel described in DataSerializerBase. If a clock change is not seen within 1 / 119.96 seconds of the last clock change, we know the frame is going long and we’ll need to drop a frame.

This information is communicated over the USB and this class, in the main process but in a second thread, continuously reads the USB. When it indicates that a frame needs to be skipped, it updates the shared_value that is seen by the second Ceed process that runs the experiment and that drops the required number of frames.
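The shared-counter pattern can be sketched with Python’s multiprocessing (a simplified illustration, not Ceed’s actual code):

```python
import multiprocessing

# The USB reader thread in the main process bumps the value; the second
# (experiment) process reads it to know how many frames to drop.
skipped_frames = multiprocessing.Value('i', 0)

def report_skipped_frame(value):
    """Called by the reader thread when the Teensy reports a long frame."""
    with value.get_lock():  # the lock makes the increment atomic
        value.value += 1
```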

The Teensy can only be used, and is only used, during an actual experiment when Ceed runs it from the second process, because otherwise the corner pixel is not visible and the GPU doesn’t match the frame rate anyway.

usb_vendor_id: int = 5824

The Teensy vendor ID. This is how we find the attached Teensy on the bus. If there’s more than one, this needs to be modified.

usb_product_id: int = 1158

The Teensy product ID. This is how we find the attached Teensy on the bus. If there’s more than one, this needs to be modified.

use_teensy

Whether to use the Teensy.

If it’s not attached, set this to False. When False, it falls back on FrameEstimation.

is_available = False

Indicates whether the Teensy is available and found.

If use_teensy, but not is_available, then we don’t do any frame adjustment.

usb_device: Optional[usb.core.Device] = None

The USB device handle.

endpoint_out: Optional[usb.core.Endpoint] = None

The output endpoint of the USB that we use to send messages to the Teensy.

endpoint_in: Optional[usb.core.Endpoint] = None

The input endpoint of the USB that we use to read messages from the Teensy.

shared_value: multiprocessing.context.BaseContext.Value = None

A value shared between the main process (which updates it based on the Teensy, indicating the number of frames skipped) and the second experiment process, which uses this value to drop frames based on how many the Teensy thinks were skipped.

configure_device()

Configures the Teensy.

This is called by the main Ceed process before the second process is started and opens the device in the main process.

release_device()

Releases a previously configured Teensy.

This is called by the main Ceed process after the second process is stopped and closes the device.

start_estimation(frame_rate)

For each experiment, it notifies the Teensy that a new experiment started so that it starts counting skipped frames once it sees the first clock toggle in the corner pixel.

Called by the main Ceed process and it starts a new thread and continuously reads from the Teensy and correspondingly updates shared_value.

stop_estimation()

After each experiment it notifies the Teensy that the experiment ended so it goes back to waiting for the next experiment notification.

Called by the main Ceed process and it also stops the second thread started by start_estimation().

class ceed.view.controller.ViewControllerBase(**kwargs)

Bases: kivy._event.EventDispatcher

Base class for running a Ceed experiment and visualizing the output of a ceed.stage on the projector (full-screen) or during preview.

There are two sub-classes; ControllerSideViewControllerBase for playing the experiment when it is previewed in the Ceed GUI and ViewSideViewControllerBase for playing the experiment in the second Ceed process when it is played “for real”.

Additionally, ControllerSideViewControllerBase is used to control the experiment from within the main Ceed process in each case.

Events
on_changed:

Triggered whenever a configuration option of the class is changed.

screen_width: int

The screen width in pixels on which the data is played. This is the full-screen width.

flip_projector

Whether to flip the projector output horizontally, around the center. See also Camera-projector-array alignment.

flip_camera

Whether to flip the camera images horizontally, around the center. See also Camera-projector-array alignment.

screen_height: int

The screen height in pixels on which the data is played. This is the full-screen height.

screen_offset_x: int

When there are multiple monitors, the monitor on which the experiment is shown in full-screen mode is controlled by the x-position of the displayed window.

E.g. to show it on the right monitor of two monitors, each 1920 pixels wide, with the main monitor on the left, screen_offset_x should be set to 1920.

frame_rate

The frame-rate of the GPU that plays the experiment.

This should be set to the exact refresh rate of the GPU, as can be found in e.g. the nvidia control panel. Otherwise, the experiment will be out of sync and played incorrectly.

This is internally converted to a fraction (frame_rate_numerator, frame_rate_denominator), so the number must be such that it can be converted to a fraction. E.g. 119.96 or 59.94 can be represented correctly as fractions.

frame_rate_numerator: int

The numerator of the frame_rate fraction.

frame_rate_denominator: int

The denominator of the frame_rate fraction.

use_software_frame_rate

Depending on the CPU/GPU, the software may be unable to render faster than the GPU refresh rate. In that case, the GPU limits us to the GPU frame rate and frame_rate should be set to match the GPU refresh rate and this should be False.

If the GPU isn’t forcing a frame rate, then this should be True and frame_rate should be the desired frame rate. That will restrict us to the given frame rate. However, the actual frame rate will be wildly inaccurate in this mode, so it’s only useful for testing.

One can tell whether the GPU is forcing a frame rate by setting frame_rate to a large value, setting use_software_frame_rate to False, and observing the resultant frame rate. If it isn’t capped at some constant value, e.g. 120Hz, the GPU isn’t forcing a rate.

log_debug_timing

Whether to log the times that frames are drawn and rendered to a debug section in the h5 file.

If True, this will additionally be logged for each displayed frame in a special section in the file.

skip_estimated_missed_frames

Whether to drop frames to compensate when we detect that a previous frame was displayed for longer than a single GPU frame duration. In that case, we may want to drop an equivalent number of frames, rather than displaying all the subsequent frames at a delay.

See FrameEstimation and TeensyFrameEstimation for how we detect these long frames. Use TeensyFrameEstimation.use_teensy to control which estimator is used.

cam_transform

A 4x4 matrix that controls the rotation, offset, and scaling of the camera images relative to the projector.

In the Ceed GUI, a user can transform the camera image, in addition to flip_camera until it fully aligns with the projector output. See also Camera-projector-array alignment.

mea_transform

A 4x4 matrix that controls the rotation, offset, and scaling of the mea array relative to the camera.

This is a grid that corresponds to the electrodes in the electrode array. In the Ceed GUI, a user can transform this grid, in addition to mirror_mea until it fully aligns with a camera image of the grid from the actual array.

See also Camera-projector-array alignment and the other mea_ properties of this class.

mirror_mea

When True, the MEA grid is mirrored vertically around the center. See mea_transform also.

mea_num_rows

The number of electrode rows in the array. See mea_transform also.

mea_num_cols

The number of electrode columns in the array. See mea_transform also.

mea_pitch

The distance in pixels, center-to-center, between neighboring rows/columns. It is assumed that it is the same for columns and rows.

See mea_transform also.

mea_diameter

The diameter in pixels of the displayed electrode circles in the grid.

See mea_transform also.

pad_to_stage_handshake

As described in DataSerializerBase, Ceed sends handshaking data to the MCS system at the start of each experiment. This helps us align the Ceed and MCS data afterwards. If the root stage of the experiment is too short, it’s possible the full handshake would not be sent, preventing alignment afterwards.

If pad_to_stage_handshake and the root stage is too short, the stage will be padded so it runs for the minimum number of clock frames required to finish the handshake. The shapes will be black for those padded frames.

output_count

Whether the corner pixel is used to output frame information on the PROPixx controller IO pot as described in DataSerializerBase.

If True, ceed.storage.controller.DataSerializerBase is used to set the 24 bits of the corner pixel. Otherwise, that pixel is treated like the other normal pixels.

fullscreen

Whether the second Ceed window that runs the “real experiment” is run in fullscreen mode.

In fullscreen mode the window has no borders and takes over the whole screen.

stage_active

True when an experiment is being played. Read-only.

cpu_fps

The estimated CPU frames-per-second of the window playing the experiment.

gpu_fps

The estimated GPU frames-per-second of the window playing the experiment.

video_modes = ['RGB', 'QUAD4X', 'QUAD12X']

The video modes that the PROPixx projector can be set to.

See also Video mode.

led_modes = {'B': 3, 'G': 5, 'GB': 1, 'R': 6, 'RB': 2, 'RG': 4, 'RGB': 0, 'none': 7}

The color modes the PROPixx projector can be set to.

It determines which of the RGB LEDs are turned OFF. E.g. "RG" means that the blue LED is OFF.

video_mode: str

The current video mode from among the video_modes.

See also Video mode.

LED_mode

The LED mode the projector will be set to during the experiment.

Its value is from the led_modes.

LED_mode_idle

The LED mode the projector will be set to before/after the experiment. This is used to turn OFF the projector LEDs in between experiments so that light is not projected on the tissue while stages are designed.

Its value is from the led_modes.

do_quad_mode

Whether the video mode is one of the quad modes. Read-only.

current_red

The current to use for the projector red LED.

Its value is between 0 and 43 amps.

current_green

The current to use for the projector green LED.

Its value is between 0 and 43 amps.

current_blue

The current to use for the projector blue LED.

Its value is between 0 and 43 amps.

pre_compute_stages: bool

Whether the stage run by the experiment should be pre-computed. See Pre-computing for details.

canvas_name = 'view_controller'

Name used for the Kivy canvas to which we add the experiment’s graphics instructions.

current_canvas = None

The last canvas used, on which the experiment’s shapes, graphics, and color instructions were added.

tick_event = None

The kivy clock event that updates the shapes’ colors on every frame.

tick_func = None

The tick_stage() generator that updates the shapes on every frame.

count = 0

The current global frame count, reset for each experiment.

This number divided by the effective_frame_rate is the current global experiment time.

experiment_uuid: bytes = b''

A unique uuid that is re-generated before each experiment and sent along over the corner pixel as the initial uniquely-identifying handshake-pattern. It allows us to locate this experiment in the MCS data post-hoc.

See DataSerializerBase.

effective_frame_rate: fractions.Fraction

The effective frame rate at which the experiment’s shapes are updated.

E.g. in 'QUAD4X' video_mode shapes are updated at about 4 * 120Hz = 480Hz.

It is read only and automatically computed.

serializer = None

The ceed.storage.controller.DataSerializerBase.get_bits() generator instance that generates the corner pixel value.

It is advanced for each frame and its value set to the 24-bits of the corner pixel.

serializer_tex = None

The kivy texture that displays the corner pixel value on screen.

queue_view_read: multiprocessing.context.BaseContext.Queue = None

The queue used by the second viewer process side to receive messages from the main GUI controller side.

queue_view_write: multiprocessing.context.BaseContext.Queue = None

The queue used by the second viewer process side to send messages to the main GUI controller side.

propixx_lib

True when the propixx python library (pypixxlib) is available. Read-only.

shape_views: List[Dict[str, kivy.graphics.context_instructions.Color]] = []

List of kivy shapes graphics instructions added to the current_canvas.

These are the shapes whose color and intensity are controlled by the experiment.

stage_shape_names: List[str] = []

List of all the shape names used during this experiment.

stages_with_gl: List[List[ceed.stage.CeedStage]] = []

The list of stages that returned True in add_gl_to_canvas() and need to be called for every frame.

frame_estimation: ceed.view.controller.FrameEstimation = None

The running-median based frame dropping estimator.

See FrameEstimation.

teensy_frame_estimation: ceed.view.controller.TeensyFrameEstimation = None

The Teensy based frame dropping estimator.

See TeensyFrameEstimation.

request_process_data(data_type, data)

Called during the experiment, either by the second or main Ceed process (when previewing) to pass data to the main controller to be logged or displayed.

It is the general interface by which the frame callbacks pass data back to the controller.

add_graphics(canvas, stage: ceed.stage.CeedStage, black_back=False)

Called at the start of the experiment to add all the graphics required to visualize the shapes, to the current_canvas.

start_stage(stage_name: str, canvas)

Starts the experiment using the special last_experiment_stage_name stage.

It adds the graphics instructions to the canvas, saves it as current_canvas, and starts playing the experiment using the stage.

stage_name is ignored because we use the special stage instead.

end_stage()

Ends the current experiment, if one is running.

tick_callback(*largs)

Called for every CPU Clock frame to handle any processing work.

If not use_software_frame_rate and if the GPU restricts the CPU to the GPU refresh rate, then this is called once before each frame is rendered so we can update the projector at the expected frame rate.

Before the experiment starts for real, we run 50 empty warmup frames. Warmup is required to ensure the projector LED has had time to change to the experiment LED_mode (compared to LED_mode_idle). It also allows us to estimate the render time of the first experiment frame for FrameEstimation.

flip_callback(*largs)

Called for every GPU rendered frame by the graphics system.

This method lets us estimate the rendering times and if we need to drop frames.

class ceed.view.controller.ViewSideViewControllerBase(**kwargs)

Bases: ceed.view.controller.ViewControllerBase

This class is used for experiment control when Ceed is running a real experiment in the second Ceed process.

If Ceed is running in the second process started with view_process_enter(), then this is a “real” experiment and this class is used. It has an inter-process queue from which it gets messages from the main Ceed process, such as to start or stop an experiment. It also sends messages back to the main process, including data about the rendered frames and data to be logged.

start_stage(stage_name, canvas)

Starts the experiment using the special last_experiment_stage_name stage.

It adds the graphics instructions to the canvas, saves it as current_canvas, and starts playing the experiment using the stage.

stage_name is ignored because we use the special stage instead.

end_stage()

Ends the current experiment, if one is running.

request_process_data(data_type, data)

Called during the experiment, either by the second or main Ceed process (when previewing) to pass data to the main controller to be logged or displayed.

It is the general interface by which the frame callbacks pass data back to the controller.

send_keyboard_down(key, modifiers, t)

Called by the window for every keyboard key press; the event is forwarded to the main GUI process to handle.

send_keyboard_up(key, t)

Called by the window for every keyboard key release; the event is forwarded to the main GUI process to handle.

handle_exception(exception, exc_info=None)

Called upon an error which is passed on to the main process.

view_read(*largs)

Communication between the two processes occurs through queues. This method is run periodically by the Kivy Clock to serve the queue and read and handle messages from the main GUI.
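The pattern of serving a queue from a periodic clock callback can be sketched as draining all pending messages without blocking. This is an illustrative sketch under assumed message names ('eof', 'config'), not Ceed's actual protocol, and it uses a plain queue in place of the inter-process one.

```python
from queue import Queue, Empty

def serve_queue(read_queue, handler):
    """Drain all pending messages; return False once EOF is seen.

    A periodic clock callback would call this on every tick, so it must
    never block: get_nowait() stops as soon as the queue is empty.
    """
    while True:
        try:
            msg, value = read_queue.get_nowait()
        except Empty:
            return True  # nothing left this tick; stay alive
        if msg == 'eof':
            return False  # main process asked us to shut down
        handler(msg, value)

q = Queue()
q.put(('config', {'fullscreen': True}))
q.put(('eof', None))
seen = []
alive = serve_queue(q, lambda m, v: seen.append((m, v)))
# 'config' was handled; 'eof' ended the loop, so alive is False.
```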

prepare_view_window(*largs)

Called before CeedViewApp is run, to prepare the new window according to the configuration parameters.

ceed.view.controller.view_process_enter(read: multiprocessing.context.BaseContext.Queue, write: multiprocessing.context.BaseContext.Queue, settings: Dict[str, Any], app_settings: dict, shared_value: multiprocessing.context.BaseContext.Value)

Entry method for the second Ceed process that runs “real” experiments.

It is called by this process when it is created. This in turn configures the app and then runs it until it’s closed.

The experiment is run in this process by ViewSideViewControllerBase. It receives control messages and sends back data to the main process over the provided queues. ControllerSideViewControllerBase handles these queues on the main process side.
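The two-queue control loop described above can be sketched as follows. Ceed uses multiprocessing queues between real processes; a thread stands in for the second process here so the sketch stays self-contained, and the message names ('start_stage', 'stop', etc.) are illustrative.

```python
import threading
from queue import Queue

def view_process(read, write):
    """Loop like view_process_enter: handle control messages, send data back."""
    while True:
        msg, value = read.get()
        if msg == 'start_stage':
            # The real process would start rendering the stage here.
            write.put(('stage_started', value))
        elif msg == 'stop':
            write.put(('stopped', None))
            return

# The main-process side owns one queue per direction.
to_view, from_view = Queue(), Queue()
proc = threading.Thread(target=view_process, args=(to_view, from_view))
proc.start()
to_view.put(('start_stage', 'my stage'))
to_view.put(('stop', None))
proc.join()
replies = [from_view.get_nowait() for _ in range(2)]
```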

class ceed.view.controller.ControllerSideViewControllerBase(**kwargs)

Bases: ceed.view.controller.ViewControllerBase

This class is used by the main Ceed process to control experiments run either as previews (in the main Ceed process) or as a real experiment (in a second process).

If the experiment is run in the second process, then that second process runs ViewSideViewControllerBase and this class is used by the main process to send control messages and receive experiment data from that process over queues.

Otherwise, this class directly controls the experiment.

view_process: Optional[multiprocessing.context.Process]

The second process that runs “real” experiments in full-screen mode. See view_process_enter().

selected_stage_name = ''

The name of the stage currently selected in the GUI to be run.

This will be the stage that is copied and run.

initial_cam_image: Optional[ffpyplayer.pic.Image] = None

The last camera image received before the experiment starts, if any.

See also last_cam_image.

It is only set for a “real” experiment, not during preview.

last_cam_image: Optional[ffpyplayer.pic.Image]

After the experiment ends, this contains the last camera image acquired before the experiment ended. If no image was taken during the experiment, it falls back to the last image taken before the experiment, if there is one.

This lets us keep the last image of the tissue responding to the experiment stimulation. In the GUI, after the experiment ends, there’s a button that, when pressed, takes this image (if not None) and sets it as the camera image.

It is only set for a “real” experiment, not during preview.

See also proj_pixels.

proj_size = None

If last_cam_image is not None, this contains the screen size from which the proj_pixels were generated.

It’s the second index value of the tuple returned by get_root_pixels().

It is only set for a “real” experiment, not during preview.

proj_pixels = None

If last_cam_image is not None, this contains the pixel intensity values for all the pixels shown during the last frame before the experiment ended.

Together with last_cam_image, this lets you compare the pixels displayed on the projector to the image from the tissue lighting up in response to those pixels.

It’s the first index value of the tuple returned by get_root_pixels().

It is only set for a “real” experiment, not during preview.
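The relationship between proj_pixels, proj_size, and the tuple returned by get_root_pixels() can be shown with a stub. The stub below stands in for the real method, which reads back the rendered frame; the sizes and zero-filled pixel data are dummies.

```python
def get_root_pixels():
    """Stub of get_root_pixels(): returns (pixel bytes, (width, height))."""
    size = (4, 2)                          # dummy width, height
    pixels = bytes(size[0] * size[1] * 4)  # dummy RGBA bytes, all zero
    return pixels, size

# proj_pixels is the first index of the tuple, proj_size the second.
proj_pixels, proj_size = get_root_pixels()
```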

add_graphics(canvas, stage: ceed.stage.CeedStage, black_back=True)

Called at the start of the experiment to add all the graphics required to visualize the shapes, to the current_canvas.

request_stage_start(stage_name: str, experiment_uuid: Optional[bytes] = None) None

Starts the experiment using the stage, either running it in the GUI when previewing or in the second process.

This internally calls the appropriate ViewControllerBase.start_stage() method either for ViewSideViewControllerBase or ControllerSideViewControllerBase so this should be used to start the experiment.
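The preview-versus-real dispatch described above can be sketched as a controller that either forwards the request over a queue to the second process or runs the stage locally. This is an illustrative sketch only; the Controller class, its attributes, and the message tuple are hypothetical, not Ceed's actual API.

```python
class Controller:
    """Minimal sketch of request_stage_start dispatch."""

    def __init__(self, view_process=None, queue=None):
        self.view_process = view_process  # None while previewing
        self.queue = queue                # messages to the second process
        self.log = []                     # records local (preview) starts

    def start_stage_locally(self, name):
        self.log.append(('local_start', name))

    def request_stage_start(self, stage_name):
        if self.view_process is not None:
            # Real experiment: forward the request to the second process.
            self.queue.append(('start_stage', stage_name))
        else:
            # Preview: run the stage directly in this process.
            self.start_stage_locally(stage_name)

preview = Controller()
preview.request_stage_start('stage A')   # runs locally

q = []
real = Controller(view_process=object(), queue=q)
real.request_stage_start('stage A')      # forwarded over the queue
```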

request_stage_end()

Ends the currently running experiment, whether it’s running in the GUI when previewing or in the second process.

This internally calls the appropriate ViewControllerBase.end_stage() method either for ViewSideViewControllerBase or ControllerSideViewControllerBase so this should be used to stop the experiment.

stage_end_cleanup(state=None)

Automatically called by Ceed after a request_stage_end() request; it cleans up any resources and finalizes the last experiment.

end_stage()

Ends the current experiment, if one is running.

request_fullscreen(state)

Sets the fullscreen state of the second Ceed process.

request_process_data(data_type, data)

Called during the experiment, either by the second or main Ceed process (when previewing) to pass data to the main controller to be logged or displayed.

It is the general interface by which the frame callbacks pass data back to the controller.

start_process()

Starts the second Ceed process that runs the “real” experiment using ViewSideViewControllerBase.

stop_process()

Ends the view_process process by sending an EOF to the second process.

finish_stop_process()

Automatically called by Ceed through the read queue when we receive the message that the second process received the stop_process() EOF and that it stopped.

handle_key_press(key, t, modifiers=[], down=True)

Called by by the read queue thread when we receive a keypress event from the second process.

In response it e.g. starts/stops the experiment, closes the second process etc.

controller_read(*largs)

Called periodically by the Kivy Clock to serve the queue that receives messages from the second Ceed process.

set_pixel_mode(state, ignore_exception=False)

Sets the projector pixel mode to show the corner pixel on the controller IO.

It is called with True before the experiment starts and with False when it ends.

set_led_mode(mode, ignore_exception=False)

Sets the projector’s LED mode to one of the ViewControllerBase.led_modes.

set_video_mode(mode, ignore_exception=False)

Sets the projector’s video mode to one of the ViewControllerBase.video_modes.

set_leds_current()

Sets the projector’s RGB LEDs to the current given in the settings.

ceed.view.controller.ignore_vpixx_import_error = False

Ceed requires the pypixxlib package to control the projector. Ceed can still run in demo mode without it being installed (it requires hardware drivers to install), and it will then ignore any projector commands.

Set this to True to make Ceed skip the projector commands, e.g. during testing on CI.
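The skip-when-unavailable behavior this flag enables can be sketched as a guard around each projector command. The function send_projector_command and its arguments are hypothetical, introduced only to illustrate the pattern; they are not part of Ceed's API.

```python
def send_projector_command(command, lib, ignore_import_error):
    """Run command(lib), or skip/raise when the library failed to import.

    lib is the imported projector library, or None if the import failed
    (e.g. pypixxlib's hardware drivers are not installed).
    """
    if lib is None:
        if ignore_import_error:
            return False  # demo mode / CI: silently skip the command
        raise ImportError('pypixxlib is required to control the projector')
    command(lib)
    return True

# With the flag set, a missing library turns commands into no-ops:
skipped = send_projector_command(
    lambda lib: lib.setLedMode('RGB'),  # hypothetical projector call
    lib=None, ignore_import_error=True)
# → False: the command was skipped rather than raising ImportError.
```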