Commit message | Author | Age

Time arrays are generated deterministically using np.linspace,
so checking for monotonicity is redundant and causes issues
with very large arrays due to float32 precision limits.
This removes the check entirely since any non-monotonic
time array would indicate a fundamental issue with numpy
itself, not our code.
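For context, the float32 limit this commit refers to is easy to reproduce; a minimal sketch, using a 50 ns (20 MHz) sampling interval as the example value:

```python
import numpy as np

# float32 has a 24-bit significand, so near t = 1.0 s the smallest
# representable step is np.spacing(np.float32(1.0)) ~ 1.2e-7 s.
# A 50 ns sampling interval is below that, so consecutive timestamps
# collide once the array is cast down to float32.
dt = 5e-8                         # 20 MHz sampling interval (example value)
t64 = 1.0 + dt * np.arange(64)    # strictly increasing in float64
t32 = t64.astype(np.float32)      # precision lost on the cast
print(np.all(np.diff(t64) > 0))   # float64 stays monotonic
print(np.all(np.diff(t32) > 0))   # float32 does not
```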
- Generate time arrays with float64 precision before converting to float32
- This prevents precision loss that causes non-monotonic time arrays
- Affects datasets with >10 million samples (e.g., 200M points at 20 MHz)
- Changed empty time array warning to raise RuntimeError for better error handling
- Applied fix to both rd() and rd_chunked() functions
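A sketch of the pattern the bullets describe (the helper name is hypothetical; the package's actual code may differ):

```python
import numpy as np

def make_time_array(n_samples, t0, dt, dtype=np.float32):
    # Hypothetical helper illustrating the fix: do the arithmetic in
    # float64 so each timestamp is correctly rounded, then cast to the
    # storage dtype once at the end instead of accumulating in float32.
    t = t0 + dt * np.arange(n_samples, dtype=np.float64)
    return t.astype(dtype)
```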
- Simplified monotonic check to just require positive differences
- Removed machine epsilon tolerance that was too strict for ns-scale timing
- Increased marker size in diffusion scatter plots
- Bumped version to 2.1.0
- Fixed edge cases in ACF calculation and diffusion processing
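The simplified check might look like this (function name assumed):

```python
import numpy as np

def is_strictly_increasing(t):
    # Simplified check as described above: every successive difference
    # must be positive. No epsilon tolerance, since ns-scale sampling
    # intervals can be smaller than a machine-epsilon-based cutoff.
    t = np.asarray(t)
    return bool(np.all(np.diff(t) > 0))
```

For a 1 ns grid, `is_strictly_increasing([0, 1e-9, 2e-9])` holds, while a repeated timestamp fails it.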
- Test that resampling preserves event detection
- Test uniform time arrays after resampling
- All tests passing
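The uniformity test could be sketched like so, with plain block averaging standing in for the package's resampler (details assumed):

```python
import numpy as np

def test_uniform_time_after_resampling():
    # Build a uniform 1 ns time grid, block-average it by a factor of 4
    # (a stand-in for the package's downsampling), and check that the
    # resampled grid is still uniformly spaced.
    dt, factor = 1e-9, 4
    t = np.arange(1000, dtype=np.float64) * dt
    t_ds = t[: t.size // factor * factor].reshape(-1, factor).mean(axis=1)
    steps = np.diff(t_ds)
    assert np.allclose(steps, steps[0])
```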
Add target_sampling_interval parameter to detect_from_wfm() for
downsampling oversampled data.
Minor version bump for new feature.
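One plausible reading of the parameter's semantics, as a sketch (the rounding rule here is an assumption, not the package's documented behaviour):

```python
def downsample_factor(current_dt, target_sampling_interval):
    # Choose an integer block size so the resampled interval is as
    # close as possible to the requested target (assumed semantics).
    return max(1, round(target_sampling_interval / current_dt))

# 50 ns (20 MHz) data with a 200 ns target gives a factor of 4.
print(downsample_factor(5e-8, 2e-7))
```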
- Add transivent.resample subpackage for signal preprocessing
- Implement average_downsample() for block averaging downsampling
- Add downsample_to_interval() for interval-based downsampling
- Add target_sampling_interval parameter to detect_from_wfm()
- Preserves signal amplitude and improves SNR through averaging
- Creates uniform time arrays to avoid validation warnings
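A minimal sketch of block-averaging downsampling in the spirit of average_downsample() (the real signature and edge handling may differ):

```python
import numpy as np

def average_downsample(data, factor):
    # Average each run of `factor` consecutive samples into one output
    # sample, trimming any incomplete trailing block. Averaging N
    # uncorrelated-noise samples improves SNR by roughly sqrt(N),
    # which is the improvement the bullet points refer to.
    data = np.asarray(data, dtype=np.float64)
    n = data.size // factor * factor
    return data[:n].reshape(-1, factor).mean(axis=1)
```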
- Test absolute filename with data_path (main bug fix scenario)
- Test relative filename with data_path
- Test missing XML sidecar error handling
- Added pytest as dev dependency
When rd() is called with a data_path, it needs to pass this through to
get_waveform_params() so the XML sidecar file can be found. Previously,
if the filename was already an absolute path, data_path would be ignored,
causing a FileNotFoundError when the XML sidecar was in a different directory.
This fix ensures that:
- If data_path is provided, it's always used with the basename
- If data_path is None, the file's directory is used as data_path
- This maintains backward compatibility while fixing the path resolution
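The resolution rules above can be sketched as a small helper (name hypothetical; the actual rd() internals may differ):

```python
import os

def resolve_paths(filename, data_path=None):
    # Hypothetical helper illustrating the rules above:
    # - with data_path given, join it with the basename so an absolute
    #   filename no longer bypasses it;
    # - with data_path=None, fall back to the file's own directory.
    if data_path is not None:
        return os.path.join(data_path, os.path.basename(filename)), data_path
    return filename, os.path.dirname(os.path.abspath(filename))
```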
- example.py: Update to use detect_from_wfm() instead of process_file()
- example_diffusion.py: Import diffusion functions from event_processor submodule
- Both examples now correctly use the v2.0.0 API structure
The configure_logging function was accidentally removed from the public API
in __init__.py but was still being used in examples. This restores it to
maintain backward compatibility.
Fixes an issue where the examples failed with an ImportError.
Major API refactoring with simplified public interface.
- Added EventProcessor for high-level event processing workflow
- New utility functions for data preprocessing
- Additional example scripts for different use cases
- Comprehensive test suite
- Updated documentation with migration guide
Event detection and analysis pipeline for transient events in time-series data.
- Event detection based on SNR thresholds
- Configurable background estimation and noise analysis
- Visualization with scopekit integration
- Chunked processing for large files
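A toy version of the SNR-threshold detection this pipeline describes (the background and noise estimators here are assumptions; the package's pipeline is configurable):

```python
import numpy as np

def detect_events(signal, snr_threshold=5.0):
    # Estimate the background as the median and the noise level as the
    # MAD-based robust sigma, then flag samples whose deviation from
    # the background exceeds snr_threshold times the noise.
    signal = np.asarray(signal, dtype=np.float64)
    background = np.median(signal)
    noise = 1.4826 * np.median(np.abs(signal - background))
    return np.flatnonzero(np.abs(signal - background) > snr_threshold * noise)
```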