Open-Source Python Module for the Analysis of Personalized Light Exposure Data from Wearable Light Loggers and Dosimeters
Light exposure fundamentally influences human physiology and behavior, with light being the most important input to the circadian system. Throughout the day, people are exposed to various scenes differing in light level, spectral composition and spatio-temporal properties. Personalized light exposure can be measured through wearable light loggers and dosimeters, including wrist-worn actimeters containing light sensors, yielding time series of an individual's light exposure. There is growing interest in relating light exposure patterns to health outcomes, requiring analytic techniques to summarize light exposure properties. Building on a previously published Python-based module, here we introduce the pyLight module. This module allows users to extract light exposure data recordings from a wide range of devices. It also includes software tools to clean and filter the data, and to compute common metrics for quantifying and visualizing light exposure data. In this tutorial, we demonstrate the use of pyLight on an example dataset with the following processing steps: (1) loading, accessing and visual inspection of a publicly available dataset, (2) truncation, masking, filtering and binarization of the dataset, (3) calculation of summary metrics, including time above threshold (TAT) and mean light timing above threshold (MLiT). The module paves the way for open-source, large-scale automated analyses of light exposure data.
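As an illustration of the summary metrics named in step (3), the following sketch computes TAT and a simple (non-circular) MLiT directly with pandas on a synthetic one-day recording. The function names, the 250 lx threshold, and the example data are assumptions made for this illustration and are not the pyLight API.

```python
# Minimal sketch of two light-exposure summary metrics: time above threshold (TAT)
# and mean light timing above threshold (MLiT). Illustrative only; not the pyLight API.
import numpy as np
import pandas as pd


def tat(series: pd.Series, threshold: float) -> pd.Timedelta:
    """Total time the light level exceeds `threshold` (lux)."""
    step = series.index[1] - series.index[0]      # assumes a regular sampling grid
    return (series > threshold).sum() * step


def mlit(series: pd.Series, threshold: float) -> float:
    """Mean clock time (in hours) of samples above `threshold`.

    Ignores wrap-around at midnight for simplicity.
    """
    above = series[series > threshold]
    if len(above) == 0:
        return float("nan")
    hours = above.index.hour + above.index.minute / 60 + above.index.second / 3600
    return float(hours.mean())


if __name__ == "__main__":
    # Synthetic one-day recording at 1-minute resolution (illustrative only).
    idx = pd.date_range("2024-06-01", periods=24 * 60, freq="1min")
    rng = np.random.default_rng(0)
    lux = pd.Series(rng.gamma(shape=2.0, scale=150.0, size=len(idx)), index=idx)

    print("TAT(250 lx): ", tat(lux, 250))
    print("MLiT(250 lx):", round(mlit(lux, 250), 2), "h")
```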
A Snapshot of 118 Solid State Lighting Testing Laboratories' Capabilities
The National Institute of Standards and Technology (NIST) began to offer proficiency testing for Solid-State Lighting (SSL) products through a Measurement Assurance Program (MAP) in 2010. The MAP provided proficiency testing complementing laboratory accreditation to ensure that as SSL products became more prevalent, capable testing laboratories would be available to handle the volume of measurement work. This article communicates the results of the first version of the MAP, in which 118 laboratories worldwide participated. The results of the comparison provide a snapshot of the capabilities of accredited laboratories worldwide. Statistical analysis of how the laboratories' measurements compared to NIST's measurements for photometric, colorimetric, and electrical quantities, along with fit parameters for each measurement, is presented. In general, all the laboratory results are within ±4 % for total luminous flux and luminous efficacy measurements. The discussion provides reasons for any discrepancies or large uncertainty intervals found in the data. For example, a major finding was that measured differences of RMS current had a larger standard deviation and number of outliers than expected. Two possible explanations are (1) the discrepancies are due to issues with using 4-pole sockets, and (2) the large deviation is caused by some solid-state lamps being sensitive to impedance and slew rate of AC power supplies. Further research in this area is being conducted by NIST to help the testing community reach more consistent measurement results.
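To make the laboratory-versus-reference comparison concrete, the sketch below computes each laboratory's percent difference from a reference value, the spread of those differences, and a simple outlier flag. The column names, the 3-sigma rule, and the example numbers are illustrative assumptions, not the MAP analysis procedure itself.

```python
# Hedged sketch: percent difference of each lab's result from a reference value,
# plus a simple 3-sigma outlier flag. Illustrative only.
import pandas as pd


def compare_to_reference(lab_values: pd.Series, reference: float) -> pd.DataFrame:
    """Percent difference of each lab from the reference, with outlier flags."""
    pct_diff = 100.0 * (lab_values - reference) / reference
    sigma = pct_diff.std(ddof=1)
    return pd.DataFrame({
        "pct_diff": pct_diff,
        "outlier": (pct_diff - pct_diff.mean()).abs() > 3 * sigma,  # simple 3-sigma rule
    })


# Example: total luminous flux (lm) reported by a handful of hypothetical labs.
flux = pd.Series([812.0, 805.5, 809.8, 798.0, 840.1],
                 index=["lab_A", "lab_B", "lab_C", "lab_D", "lab_E"])
print(compare_to_reference(flux, reference=810.0))
```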
Evaluating the Visibility of Architectural Features for People with Low Vision - A Quantitative Approach
Most people with low vision rely on their remaining functional vision for mobility. Our goal is to provide tools to help design architectural spaces in which safe and effective mobility is possible for people with low vision, spaces that we refer to as visually accessible. We describe an approach that starts with a 3D CAD model of a planned space and produces labeled images indicating whether or not structures that are potential mobility hazards are visible at a particular level of low vision. There are two main parts to the analysis. The first, previously described, represents low-vision status by filtering a calibrated luminance image generated from the CAD model and associated lighting and materials information to produce a new image with unseen detail removed. The second part, described in this paper, uses both these filtered images and information about the geometry of the space obtained from the CAD model and related lighting and surface material specifications to produce a quantitative estimate of the likelihood of particular hazards being visible. We provide examples of the workflow required, a discussion of the novelty and implications of the approach, and a short discussion of needed future work.
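One way to picture the second stage is a contrast test applied to the filtered luminance image: a labeled hazard region is compared against its local background, and the hazard is judged visible when that contrast exceeds a threshold. The Weber-contrast formulation, the threshold value, and the region masks below are illustrative assumptions, not the authors' published metric.

```python
# Minimal sketch: decide whether a hazard region in an already-filtered luminance
# image remains visible, using Weber contrast against its background. Illustrative only.
import numpy as np


def hazard_visible(luminance: np.ndarray,
                   hazard_mask: np.ndarray,
                   background_mask: np.ndarray,
                   contrast_threshold: float = 0.1) -> bool:
    """True if the hazard's Weber contrast against its background exceeds threshold."""
    l_hazard = luminance[hazard_mask].mean()
    l_background = luminance[background_mask].mean()
    weber = abs(l_hazard - l_background) / max(l_background, 1e-9)
    return weber > contrast_threshold


# Toy example: a step (hazard) 20 % brighter than the surrounding floor.
img = np.full((100, 100), 50.0)                 # filtered luminance, cd/m^2
img[40:60, 40:60] = 60.0                        # hazard region
hazard = np.zeros_like(img, dtype=bool)
hazard[40:60, 40:60] = True
print(hazard_visible(img, hazard, ~hazard))     # -> True (contrast = 0.2)
```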