Compare commits

...

2 Commits

Author      SHA1        Message                                     Date
Roboboffin  3ac2cc15b1  Fix line endings and some cleanup           2025-10-23 04:00:21 +01:00
Roboboffin  24eb354ace  Normalize line endings per .gitattributes   2025-10-23 03:59:37 +01:00
63 changed files with 2634 additions and 2572 deletions

39
.gitattributes vendored Normal file
View File

@@ -0,0 +1,39 @@
# Auto-detect text files and normalize to LF in the repo
* text=auto
# Explicit text rules
*.md text
*.txt text
*.json text
*.yml text
*.yaml text
*.toml text
*.c text
*.cpp text
*.h text
*.hpp text
*.rs text
*.py text
*.go text
*.js text
*.ts text
*.css text
*.html text
# Scripts that run on Unix must be LF (attribute lines do not support trailing comments)
*.sh text eol=lf
*.bash text eol=lf
# Windows-native scripts that should remain CRLF in working trees
*.bat text eol=crlf
*.cmd text eol=crlf
*.ps1 text eol=crlf
# Binary (never touch line endings)
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.pdf binary
*.zip binary
*.exe binary
*.dll binary
*.glb binary
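Because "* text=auto" only takes effect as paths are (re)staged, a repository that already contains CRLF blobs still has to be renormalized once this file lands, which is what the two commits above appear to do. The standard sequence is:

git add --renormalize .
git commit -m "Normalize line endings per .gitattributes"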

2
.gitignore vendored
View File

@@ -1,2 +1,4 @@
Builds
NeuralSynth Installer.exe
AudioPluginHost
Projucer

19
AGENTS.md Normal file
View File

@@ -0,0 +1,19 @@
# Repository Guidelines
## Project Structure & Module Organization
NeuralSynth is a JUCE-based synthesizer plugin. Core runtime code lives in `Source/`, with `PluginProcessor.*` orchestrating audio/MIDI flow, `PluginEditor.*` driving the UI, `SynthVoice.*` implementing per-voice DSP, and helper headers such as `AudioEngine.h` and `NeuralSharedParams.h` exposing shared state. JUCE-generated scaffolding sits in `JuceLibraryCode/`; regenerate it through `NeuralSynth.jucer` rather than editing by hand. Platform build assets are under `Builds/` (for example `Builds/LinuxMakefile/`), and finished binaries default to `Builds/LinuxMakefile/build/`. Install scripting for Windows lives in `NeuralSynth.iss`.
## Build, Test, and Development Commands
- `cd Builds/LinuxMakefile && make CONFIG=Debug` compiles the standalone app and VST3 with debug symbols.
- `cd Builds/LinuxMakefile && make CONFIG=Release` builds optimized artefacts for distribution.
- `cd Builds/LinuxMakefile && make clean` removes intermediate objects when builds misbehave.
- `./Builds/LinuxMakefile/build/NeuralSynth` launches the standalone target; VST3 binaries appear in `Builds/LinuxMakefile/NeuralSynth.vst3` and copy into `~/.vst3` when the Makefile post-build step runs.
## Coding Style & Naming Conventions
C++ sources use 4-space indentation, brace-on-new-line functions, and JUCE's `juce::` namespace types. Prefer `PascalCase` for classes (e.g., `NeuralSynthAudioProcessor`), camelCase for methods and members (`prepareToPlay`, `audioEngine`), and suffix queues/collectors clearly (`AudioBufferQueue`, `ScopeDataCollector`). Match existing lambda formatting in `SynthVoice.cpp`, keep includes sorted locally, and avoid editing generated files under `JuceLibraryCode/`.
## Testing Guidelines
Automated tests are not yet configured; rely on manual validation in the standalone app or a DAW host. After each change, rebuild and audition key features: oscillator switching, chorus/delay/reverb chains, parameter automation, and MIDI input. For DSP tweaks, monitor the oscilloscope components linked to the buffer queues to confirm signal stability. Document ad-hoc test coverage in your pull request until formal tests are added.
## Commit & Pull Request Guidelines
Follow the existing concise, imperative commit style (`Add chorus modulation`, `Fix voice detune`). Scope each commit to a logical change and format messages as a single summary line. Pull requests should describe the motivation, outline testing performed, and link issues when relevant. Include platform notes (Linux, Windows installer) and screenshots or audio clips for UI-affecting or sonic changes so reviewers can assess impact quickly.

View File

@@ -1,12 +1,12 @@
Important Note!!
================
The purpose of this folder is to contain files that are auto-generated by the Projucer,
and ALL files in this folder will be mercilessly DELETED and completely re-written whenever
the Projucer saves your project.
Therefore, it's a bad idea to make any manual changes to the files in here, or to
put any of your own files in here if you don't want to lose them. (Of course you may choose
to add the folder's contents to your version-control system so that you can re-merge your own
modifications after the Projucer has saved its changes).

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_basics/juce_audio_basics.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_basics/juce_audio_basics.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_devices/juce_audio_devices.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_devices/juce_audio_devices.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_formats/juce_audio_formats.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_formats/juce_audio_formats.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_AAX.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_AAX.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_AAX_utils.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_ARA.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_AU_1.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_AU_2.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_AUv3.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_LV2.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_LV2.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_Standalone.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_Unity.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_VST2.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_VST2.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_VST3.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_plugin_client/juce_audio_plugin_client_VST3.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_processors/juce_audio_processors.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_processors/juce_audio_processors.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_processors/juce_audio_processors_ara.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_processors/juce_audio_processors_lv2_libs.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_utils/juce_audio_utils.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_audio_utils/juce_audio_utils.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_core/juce_core.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_core/juce_core.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_core/juce_core_CompilationTime.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_data_structures/juce_data_structures.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_data_structures/juce_data_structures.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_dsp/juce_dsp.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_dsp/juce_dsp.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_events/juce_events.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_events/juce_events.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_graphics/juce_graphics.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_graphics/juce_graphics.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_graphics/juce_graphics_Harfbuzz.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_graphics/juce_graphics_Sheenbidi.c>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_gui_basics/juce_gui_basics.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_gui_basics/juce_gui_basics.mm>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_gui_extra/juce_gui_extra.cpp>

View File

@@ -1,8 +1,8 @@
/*
IMPORTANT! This file is auto-generated each time you save your
project - if you alter its contents, your changes may be overwritten!
*/
#include <juce_gui_extra/juce_gui_extra.mm>

View File

@@ -7,6 +7,8 @@
pluginVSTNumMidiInputs="1" pluginChannelConfigs="{0, 2}" version="0.0.1">
<MAINGROUP id="UQstsW" name="NeuralSynth">
<GROUP id="{D5B48DA9-9A47-914A-8C72-EE5E8DD868A3}" name="Source">
<FILE id="Mkx0uo" name="BlepOsc.h" compile="0" resource="0" file="Source/BlepOsc.h"/>
<FILE id="axDpEq" name="WavetableOsc.h" compile="0" resource="0" file="Source/WavetableOsc.h"/>
<FILE id="nmKMnf" name="GraphComponent.h" compile="0" resource="0"
file="Source/GraphComponent.h"/>
<FILE id="CjJ141" name="NeuralSharedParams.h" compile="0" resource="0"

View File

@@ -1,49 +1,49 @@
#pragma once
//==============================================================================
template <typename SampleType>
class AudioBufferQueue
{
public:
//==============================================================================
static constexpr size_t order = 9;
static constexpr size_t bufferSize = 1U << order;
static constexpr size_t numBuffers = 5;
//==============================================================================
void push(const SampleType* dataToPush, size_t numSamples)
{
jassert(numSamples <= bufferSize);
int start1, size1, start2, size2;
abstractFifo.prepareToWrite(1, start1, size1, start2, size2);
jassert(size1 <= 1);
jassert(size2 == 0);
if (size1 > 0)
juce::FloatVectorOperations::copy(buffers[(size_t)start1].data(), dataToPush, (int)juce::jmin(bufferSize, numSamples));
abstractFifo.finishedWrite(size1);
}
//==============================================================================
void pop(SampleType* outputBuffer)
{
int start1, size1, start2, size2;
abstractFifo.prepareToRead(1, start1, size1, start2, size2);
jassert(size1 <= 1);
jassert(size2 == 0);
if (size1 > 0)
juce::FloatVectorOperations::copy(outputBuffer, buffers[(size_t)start1].data(), (int)bufferSize);
abstractFifo.finishedRead(size1);
}
private:
//==============================================================================
juce::AbstractFifo abstractFifo{ numBuffers };
std::array<std::array<SampleType, bufferSize>, numBuffers> buffers;
};
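For context, this is the single-producer/single-consumer FIFO that carries scope frames from the audio thread to the UI. A minimal usage sketch (the function and variable names below are illustrative, not taken from this repo):

#include <array>
#include <JuceHeader.h>
#include "AudioBufferQueue.h"

AudioBufferQueue<float> scopeQueue;   // shared between the audio and GUI threads

// Audio thread: push up to one frame per block; push() silently drops the frame if the FIFO is full.
void pushScopeFrame (const float* channelData, int numSamples)
{
    scopeQueue.push (channelData,
                     juce::jmin ((size_t) numSamples, AudioBufferQueue<float>::bufferSize));
}

// GUI timer: copy the oldest available frame into a local buffer for painting.
std::array<float, AudioBufferQueue<float>::bufferSize> latestFrame {};

void pullScopeFrame()
{
    scopeQueue.pop (latestFrame.data());   // leaves latestFrame untouched if nothing is queued
}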

View File

@@ -1,47 +1,47 @@
#pragma once
#include "SynthVoice.h"
#include <JuceHeader.h>
class NeuralAudioEngine : public juce::MPESynthesiser
{
public:
static constexpr int maxNumVoices = 8;
explicit NeuralAudioEngine(NeuralSharedParams& sp)
{
// Create MPE voices
for (int i = 0; i < maxNumVoices; ++i)
addVoice(new NeuralSynthVoice(sp)); // <-- takes MPESynthesiserVoice*
// MPE synths do not use addSound(); note events are routed via MPE zones.
setVoiceStealingEnabled(true);
}
void prepare(const juce::dsp::ProcessSpec& spec) noexcept
{
setCurrentPlaybackSampleRate(spec.sampleRate);
for (auto* v : voices)
if (auto* nv = dynamic_cast<NeuralSynthVoice*>(v))
nv->prepare(spec);
}
template <typename VoiceFunc>
void applyToVoices(VoiceFunc&& fn) noexcept
{
for (auto* v : voices)
fn(dynamic_cast<NeuralSynthVoice*>(v));
}
private:
// keep base render
using juce::MPESynthesiser::renderNextSubBlock;
void renderNextSubBlock(juce::AudioBuffer<float>& outputAudio,
int startSample,
int numSamples) override
{
juce::MPESynthesiser::renderNextSubBlock(outputAudio, startSample, numSamples);
}
};
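A rough wiring sketch, assuming the processor owns both objects (as PluginProcessor.* presumably does; the free functions below stand in for processor methods and are illustrative):

#include <JuceHeader.h>
#include "AudioEngine.h"

NeuralSharedParams sharedParams;                // normally members of NeuralSynthAudioProcessor
NeuralAudioEngine  engine { sharedParams };

void prepareToPlaySketch (double sampleRate, int samplesPerBlock)
{
    engine.prepare ({ sampleRate, (juce::uint32) samplesPerBlock, 2 });   // forwards the spec to every voice
}

void processBlockSketch (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midi)
{
    buffer.clear();
    engine.renderNextBlock (buffer, midi, 0, buffer.getNumSamples());     // base-class MPE rendering
}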

View File

@@ -1,80 +1,80 @@
#pragma once
#include <JuceHeader.h>
enum class BlepWave : int { Sine = 0, Saw, Square, Triangle };
class BlepOsc
{
public:
void prepare (double sampleRate) { sr = sampleRate; resetPhase(); }
void setWave (BlepWave w) { wave = w; }
void setFrequency (float f) { freq = juce::jmax (0.0f, f); inc = freq / (float) sr; }
void resetPhase (float p = 0.0f) { phase = juce::jlimit (0.0f, 1.0f, p); }
inline float process()
{
// phase in [0..1)
float out = 0.0f;
float t = phase;
phase += inc;
if (phase >= 1.0f) phase -= 1.0f;
switch (wave)
{
case BlepWave::Sine: out = std::sin (2.0f * juce::MathConstants<float>::pi * t); break;
case BlepWave::Saw:
{
// naive saw in [-1..1]
float s = 2.0f * t - 1.0f;
// apply BLEP at the discontinuity crossing t=0
s -= polyBlep (t, inc);
out = s;
} break;
case BlepWave::Square:
{
float s = (t < 0.5f ? 1.0f : -1.0f);
// rising edge at 0.0, falling at 0.5
s += polyBlep (t, inc) - polyBlep (std::fmod (t + 0.5f, 1.0f), inc);
out = s;
} break;
case BlepWave::Triangle:
{
// integrate the BLEP square for band-limited tri
float sq = (t < 0.5f ? 1.0f : -1.0f);
sq += polyBlep (t, inc) - polyBlep (std::fmod (t + 0.5f, 1.0f), inc);
// leaky integrator to keep DC under control
z1 = z1 + (sq - z1) * inc;
out = 2.0f * z1; // scale
} break;
}
return out;
}
private:
// PolyBLEP as in Valimäki/Huovilainen
static inline float polyBlep (float t, float dt)
{
// t in [0..1)
if (t < dt)
{
t /= dt;
return t + t - t * t - 1.0f;
}
else if (t > 1.0f - dt)
{
t = (t - 1.0f) / dt;
return t * t + t + t + 1.0f;
}
return 0.0f;
}
double sr = 44100.0;
float freq = 440.0f, inc = 440.0f / 44100.0f;
float phase = 0.0f;
float z1 = 0.0f;
BlepWave wave = BlepWave::Sine;
};
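As a usage sketch (buffer handling is illustrative; note that setFrequency() should be called after prepare(), since the phase increment is derived from the current sample rate):

#include <JuceHeader.h>
#include "BlepOsc.h"

void renderSawSketch (juce::AudioBuffer<float>& buffer, double sampleRate)
{
    BlepOsc osc;
    osc.prepare (sampleRate);          // stores sr and resets the phase
    osc.setWave (BlepWave::Saw);
    osc.setFrequency (220.0f);         // after prepare(), so inc = freq / sr is correct

    auto* out = buffer.getWritePointer (0);
    for (int i = 0; i < buffer.getNumSamples(); ++i)
        out[i] = 0.25f * osc.process();   // process() returns roughly [-1, 1]; scale for headroom
}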

View File

@@ -1,99 +1,99 @@
/*
==============================================================================
GraphComponent.h
Created: 4 Jul 2025 11:43:57pm
Author: timot
==============================================================================
*/
#pragma once
#include <algorithm> // for std::minmax_element
#include "AudioBufferQueue.h"
//==============================================================================
template <typename SampleType>
class GraphComponent : public juce::Component,
private juce::Timer
{
public:
//==============================================================================
GraphComponent(SampleType minIn, SampleType maxIn, int numPointsIn)
: min(minIn), max(maxIn), numPoints(numPointsIn)
{
x.resize(numPoints);
y.resize(numPoints);
setFramesPerSecond(30);
// func will be set via setFunction before paint; provide a safe default
func = [](SampleType) noexcept { return SampleType(); };
}
//==============================================================================
void setFramesPerSecond(int framesPerSecond)
{
jassert(framesPerSecond > 0 && framesPerSecond < 1000);
startTimerHz(framesPerSecond);
}
//==============================================================================
void setFunction(const std::function<SampleType(SampleType)>& f) { func = f; }
//==============================================================================
void paint(juce::Graphics& g) override
{
g.fillAll(juce::Colours::black);
g.setColour(juce::Colours::white);
auto area = getLocalBounds();
if (hasData && area.isFinite())
{
auto h = (SampleType)area.getHeight();
auto w = (SampleType)area.getWidth();
for (size_t i = 1; i < (size_t)numPoints; ++i)
{
auto px_prev = ((x[i - 1] - min) / (max - min)) * w;
auto py_prev = h - ((y[i - 1] - minY) / (maxY - minY)) * h;
auto px_next = ((x[i] - min) / (max - min)) * w;
auto py_next = h - ((y[i] - minY) / (maxY - minY)) * h;
g.drawLine({ px_prev, py_prev, px_next, py_next });
}
}
}
//==============================================================================
void resized() override {}
private:
//==============================================================================
std::vector<SampleType> x, y;
SampleType minY{ SampleType() }, maxY{ SampleType(1) };
SampleType min{}, max{};
int numPoints{};
std::function<SampleType(SampleType)> func;
bool hasData = false;
//==============================================================================
void timerCallback() override
{
const SampleType step = (max - min) / (SampleType)(numPoints - 1);
for (int i = 0; i < numPoints; i++)
{
x[(size_t)i] = min + step * (SampleType)i;
y[(size_t)i] = func(x[(size_t)i]);
}
auto p = std::minmax_element(y.begin(), y.end());
minY = *p.first;
maxY = *p.second;
hasData = true;
repaint();
}
};
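Usage is deliberately small: construct with the x-range and point count, supply a function, and the component re-samples it on its own 30 fps timer and auto-scales the y-axis. A sketch (the ShaperPreview wrapper and the tanh curve are illustrative, not from this repo):

#include <cmath>
#include <JuceHeader.h>
#include "GraphComponent.h"

// Illustrative parent component that previews a soft-clipping curve.
struct ShaperPreview : public juce::Component
{
    ShaperPreview()
    {
        graph.setFunction ([] (float x) { return std::tanh (3.0f * x); });
        addAndMakeVisible (graph);
    }

    void resized() override { graph.setBounds (getLocalBounds()); }

    GraphComponent<float> graph { -1.0f, 1.0f, 128 };   // x in [-1, 1], sampled at 128 points
};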

View File

@@ -1,145 +1,145 @@
#pragma once
#include <atomic>
#include <unordered_map>
#include <string>
struct SliderDetail {
std::string label;
float min, max, interval, defValue;
};
using ParamMap = std::unordered_map<std::string, SliderDetail>;
// Each SliderDetail: { label, min, max, step, defaultValue }
const std::unordered_map<std::string, ParamMap> PARAM_SETTINGS = {
{ "chorus", {
{ "rate", { "Rate", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "depth", { "Depth", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "centre", { "Centre", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "feedback", { "Feedback", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "mix", { "Mix", 0.0f, 1.0f, 0.1f, 0.1f } }
}},
{ "delay", {
{ "delay", { "Delay", 0.0f, 1.0f, 0.1f, 0.1f } }
}},
{ "reverb", {
{ "roomSize", { "Room Size", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "damping", { "Damping", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "wetLevel", { "Wet Level", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "dryLevel", { "Dry Level", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "width", { "Width", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "freezeMode", { "Freeze Mode", 0.0f, 1.0f, 0.1f, 0.1f } }
}},
{ "adsr", {
{ "attack", { "Attack", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "decay", { "Decay", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "sustain", { "Sustain", 0.0f, 1.0f, 0.1f, 0.1f } },
{ "release", { "Release", 0.0f, 1.0f, 0.1f, 0.1f } }
}},
// Filter envelope group (short key: "fenv")
{ "fenv", {
{ "attack", { "Attack", 0.0f, 2.0f, 0.001f, 0.01f } },
{ "decay", { "Decay", 0.0f, 2.0f, 0.001f, 0.10f } },
{ "sustain", { "Sustain", 0.0f, 1.0f, 0.001f, 0.80f } },
{ "release", { "Release", 0.0f, 4.0f, 0.001f, 0.40f } },
{ "amount", { "Amount", -1.0f, 1.0f, 0.001f, 0.50f } }
}},
{ "flanger", {
{ "rate", { "Rate", 0.1f, 5.0f, 0.1f, 0.1f } },
{ "depth", { "Depth", 0.1f, 10.0f, 0.1f, 0.1f } }, // ms
{ "feedback", { "Feedback", 0.0f, 0.95f, 0.01f, 0.1f } },
{ "dryMix", { "Dry/Wet", 0.0f, 1.0f, 0.01f, 0.0f } },
{ "phase", { "Phase", 0.0f, 1.0f, 0.1f, 0.0f } },
{ "delay", { "Delay", 0.0f, 3.0f, 0.1f, 0.25f } } // ms base
}},
{ "filter", {
{ "cutoff", { "Cutoff", 20.0f, 20000.0f, 1.0f, 1000.0f } },
{ "resonance", { "Resonance", 0.1f, 10.0f, 0.1f, 0.7f } },
{ "type", { "L/H/B", 0.0f, 2.0f, 1.0f, 0.0f } },
{ "drive", { "Drive", 0.0f, 1.0f, 0.01f, 0.0f } },
{ "mod", { "Mod", -1.0f, 1.0f, 0.1f, 0.0f } },
{ "key", { "Key", 0.0f, 1.0f, 0.1f, 0.0f } }
}},
{ "distortion", {
{ "drive", { "Drive", 0.0f, 30.0f, 0.1f, 10.0f } },
{ "mix", { "Mix", 0.0f, 1.0f, 0.01f, 0.0f } },
{ "bias", { "Bias", -1.0f, 1.0f, 0.01f, 0.0f } },
{ "tone", { "Tone", 100.0f, 8000.0f, 10.0f, 3000.0f } },
{ "shape", { "Shape", 0.0f, 2.0f, 1.0f, 0.0f } }
}}
};
struct NeuralSharedParams
{
std::atomic<int> waveform{ -1 };
// Amp ADSR
std::atomic<float>* adsrAttack{};
std::atomic<float>* adsrDecay{};
std::atomic<float>* adsrSustain{};
std::atomic<float>* adsrRelease{};
// Delay
std::atomic<float>* delayTime{};
// Chorus
std::atomic<float>* chorusRate{};
std::atomic<float>* chorusDepth{};
std::atomic<float>* chorusCentre{};
std::atomic<float>* chorusFeedback{};
std::atomic<float>* chorusMix{};
// Reverb
std::atomic<float>* reverbRoomSize{};
std::atomic<float>* reverbDamping{};
std::atomic<float>* reverbWetLevel{};
std::atomic<float>* reverbDryLevel{};
std::atomic<float>* reverbWidth{};
std::atomic<float>* reverbFreezeMode{};
// Flanger
std::atomic<float>* flangerRate{};
std::atomic<float>* flangerDepth{};
std::atomic<float>* flangerFeedback{};
std::atomic<float>* flangerDryMix{};
std::atomic<float>* flangerPhase{};
std::atomic<float>* flangerDelay{};
// Filter (base)
std::atomic<float>* filterCutoff{};
std::atomic<float>* filterResonance{};
std::atomic<float>* filterType{};
std::atomic<float>* filterDrive{};
std::atomic<float>* filterMod{};
std::atomic<float>* filterKey{};
// Filter Env (polyphonic)
std::atomic<float>* fenvAttack{};
std::atomic<float>* fenvDecay{};
std::atomic<float>* fenvSustain{};
std::atomic<float>* fenvRelease{};
std::atomic<float>* fenvAmount{}; // +/- octaves
// Distortion
std::atomic<float>* distortionDrive{};
std::atomic<float>* distortionMix{};
std::atomic<float>* distortionBias{};
std::atomic<float>* distortionTone{};
std::atomic<float>* distortionShape{};
// Per-panel bypass (AudioParameterBool, exposed as float 0/1 via getRawParameterValue)
std::atomic<float>* chorusOn{};
std::atomic<float>* delayOn{};
std::atomic<float>* reverbOn{};
std::atomic<float>* flangerOn{};
std::atomic<float>* distortionOn{};
std::atomic<float>* filterOn{};
std::atomic<float>* eqOn{};
// EQ + Master
std::atomic<float>* lowGainDbls{};
std::atomic<float>* midGainDbls{};
std::atomic<float>* highGainDbls{};
std::atomic<float>* masterDbls{};
};
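PARAM_SETTINGS is effectively the schema for the plugin's sliders, and the raw std::atomic<float>* members are presumably filled from an AudioProcessorValueTreeState once the parameters exist. A hedged sketch of both halves, assuming the "<group>_<name>" ID convention visible in PluginEditor.cpp (the helper itself is illustrative, not the repo's actual code):

#include <JuceHeader.h>
#include "NeuralSharedParams.h"

// Generate one AudioParameterFloat per PARAM_SETTINGS entry, with IDs formed as "<group>_<name>".
static juce::AudioProcessorValueTreeState::ParameterLayout makeLayoutSketch()
{
    juce::AudioProcessorValueTreeState::ParameterLayout layout;

    for (const auto& [group, params] : PARAM_SETTINGS)
        for (const auto& [name, d] : params)
            layout.add (std::make_unique<juce::AudioParameterFloat> (
                juce::ParameterID { juce::String (group + "_" + name), 1 },   // version hint is illustrative
                juce::String (d.label),
                juce::NormalisableRange<float> (d.min, d.max, d.interval),
                d.defValue));

    return layout;
}

// Once the APVTS exists, the shared pointers are wired up, e.g.:
//   shared.chorusRate = parameters.getRawParameterValue ("chorus_rate");   // returns std::atomic<float>*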

View File

@@ -1,170 +1,170 @@
#include "PluginProcessor.h"
#include "PluginEditor.h"
#include "ScopeComponent.h"
//==============================================================================
NeuralSynthAudioProcessorEditor::NeuralSynthAudioProcessorEditor (NeuralSynthAudioProcessor& p)
: AudioProcessorEditor (&p),
audioProcessor (p),
mainScopeComponent(audioProcessor.getAudioBufferQueue())
{
auto& tree = audioProcessor.parameters;
addAndMakeVisible(mainScopeComponent);
waveformSelector.setModel(&waveformContents);
waveformContents.onSelect = [this](int row)
{
// write to the parameter so voices update safely
audioProcessor.parameters.getParameterAsValue("waveform") = (float)juce::jlimit(0, 3, row);
};
addAndMakeVisible(waveformSelector);
// --- Panels ---
adsrComponent.emplace(tree, "adsr", "Amp Env");
adsrComponent->enableGraphScope([this](float x) {
auto& tree = this->audioProcessor.parameters;
float A = tree.getParameter("adsr_attack")->getValue();
float D = tree.getParameter("adsr_decay")->getValue();
float S = tree.getParameter("adsr_sustain")->getValue();
float R = tree.getParameter("adsr_release")->getValue();
const float sustainLen = 1.0f;
const float total = A + D + sustainLen + R;
A /= total; D /= total; R /= total;
float m = 0.0f, c = 0.0f;
if (x < A) { m = 1.0f / A; c = 0.0f; }
else if (x < A + D) { m = (S - 1.0f) / D; c = 1.0f - m * A; }
else if (x < A + D + (sustainLen / total)) { m = 0.0f; c = S; }
else { m = (S / -R); c = -m; }
return m * x + c;
});
addAndMakeVisible(*adsrComponent);
chorusComponent.emplace(tree, "chorus", "Chorus");
chorusComponent->enableSampleScope(audioProcessor.getChorusAudioBufferQueue());
addAndMakeVisible(*chorusComponent);
delayComponent.emplace(tree, "delay", "Delay");
delayComponent->enableSampleScope(audioProcessor.getDelayAudioBufferQueue());
addAndMakeVisible(*delayComponent);
reverbComponent.emplace(tree, "reverb", "Reverb");
reverbComponent->enableSampleScope(audioProcessor.getReverbAudioBufferQueue());
addAndMakeVisible(*reverbComponent);
eqComponent.emplace(tree, "EQ");
addAndMakeVisible(*eqComponent);
flangerComponent.emplace(tree, "flanger", "Flanger");
flangerComponent->enableSampleScope(audioProcessor.getFlangerAudioBufferQueue());
addAndMakeVisible(*flangerComponent);
distortionComponent.emplace(tree, "distortion", "Distortion");
distortionComponent->enableSampleScope(audioProcessor.getDistortionAudioBufferQueue());
addAndMakeVisible(*distortionComponent);
filterComponent.emplace(tree, "filter", "Filter");
filterComponent->enableSampleScope(audioProcessor.getFilterAudioBufferQueue());
addAndMakeVisible(*filterComponent);
filterEnvComponent.emplace(tree, "fenv", "Filter Env");
filterEnvComponent->enableGraphScope([this](float x) {
auto& tree = this->audioProcessor.parameters;
float A = tree.getParameter("fenv_attack")->getValue();
float D = tree.getParameter("fenv_decay")->getValue();
float S = tree.getParameter("fenv_sustain")->getValue();
float R = tree.getParameter("fenv_release")->getValue();
const float sustainLen = 1.0f;
const float total = A + D + sustainLen + R;
A /= total; D /= total; R /= total;
float m = 0.0f, c = 0.0f;
if (x < A) { m = 1.0f / A; c = 0.0f; }
else if (x < A + D) { m = (S - 1.0f) / D; c = 1.0f - m * A; }
else if (x < A + D + (sustainLen / total)) { m = 0.0f; c = S; }
else { m = (S / -R); c = -m; }
return m * x + c;
});
addAndMakeVisible(*filterEnvComponent);
// Master fader + label
addAndMakeVisible(masterLevelSlider);
masterLevelLabel.setText("Master", juce::dontSendNotification);
{
juce::Font f; f.setHeight(12.0f); f.setBold(true);
masterLevelLabel.setFont(f);
}
masterLevelLabel.setJustificationType(juce::Justification::centred);
addAndMakeVisible(masterLevelLabel);
// Blank placeholder
addAndMakeVisible(blankPanel);
// Attach master parameter
gainAttachment = std::make_unique<juce::AudioProcessorValueTreeState::SliderAttachment>(
audioProcessor.parameters, "master", masterLevelSlider.slider);
setSize(1400, 720);
}
//==============================================================================
NeuralSynthAudioProcessorEditor::~NeuralSynthAudioProcessorEditor() = default;
//==============================================================================
void NeuralSynthAudioProcessorEditor::paint (juce::Graphics& g)
{
g.fillAll(getLookAndFeel().findColour (juce::ResizableWindow::backgroundColourId));
}
//==============================================================================
void NeuralSynthAudioProcessorEditor::resized()
{
auto bounds = getLocalBounds().reduced(16);
juce::Grid grid;
grid.templateRows = {
juce::Grid::TrackInfo(juce::Grid::Fr(20)), // scope row
juce::Grid::TrackInfo(juce::Grid::Fr(40)), // row 1
juce::Grid::TrackInfo(juce::Grid::Fr(40)) // row 2
};
// 6 columns: 5 content + 1 sidebar (waveform+master)
grid.templateColumns = {
juce::Grid::TrackInfo(juce::Grid::Fr(18)),
juce::Grid::TrackInfo(juce::Grid::Fr(18)),
juce::Grid::TrackInfo(juce::Grid::Fr(18)),
juce::Grid::TrackInfo(juce::Grid::Fr(18)),
juce::Grid::TrackInfo(juce::Grid::Fr(18)),
juce::Grid::TrackInfo(juce::Grid::Fr(10))
};
// Row 0
grid.items.add(juce::GridItem(mainScopeComponent)
.withArea(juce::GridItem::Span(1), juce::GridItem::Span(5)));
grid.items.add(juce::GridItem(waveformSelector)
.withArea(juce::GridItem::Span(1), juce::GridItem::Span(1)));
// Row 1
grid.items.add(juce::GridItem(*adsrComponent));
grid.items.add(juce::GridItem(*chorusComponent));
grid.items.add(juce::GridItem(*delayComponent));
grid.items.add(juce::GridItem(*reverbComponent));
grid.items.add(juce::GridItem(*eqComponent));
grid.items.add(juce::GridItem(masterLevelLabel));
// Row 2
grid.items.add(juce::GridItem(*flangerComponent));
grid.items.add(juce::GridItem(*distortionComponent));
grid.items.add(juce::GridItem(*filterComponent));
grid.items.add(juce::GridItem(*filterEnvComponent));
grid.items.add(juce::GridItem(blankPanel));
grid.items.add(juce::GridItem(masterLevelSlider));
grid.performLayout(bounds);
}
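A note on the enableGraphScope lambdas above: getParameter(...)->getValue() returns the normalised 0..1 value, so the preview plots envelope shape rather than real time. With total = A + D + 1 + R (a fixed sustain hold of length 1) and A, D, R rescaled by total, the curve is x/A on the attack, 1 + (S - 1)(x - A)/D on the decay, S during the hold, and S(1 - x)/R on the release, which reaches zero exactly at x = 1.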
#include "PluginProcessor.h"
#include "PluginEditor.h"
#include "ScopeComponent.h"
//==============================================================================
NeuralSynthAudioProcessorEditor::NeuralSynthAudioProcessorEditor (NeuralSynthAudioProcessor& p)
: AudioProcessorEditor (&p),
audioProcessor (p),
mainScopeComponent(audioProcessor.getAudioBufferQueue())
{
auto& tree = audioProcessor.parameters;
addAndMakeVisible(mainScopeComponent);
waveformSelector.setModel(&waveformContents);
waveformContents.onSelect = [this](int row)
{
// write to the parameter so voices update safely
audioProcessor.parameters.getParameterAsValue("waveform") = (float)juce::jlimit(0, 3, row);
};
addAndMakeVisible(waveformSelector);
// --- Panels ---
adsrComponent.emplace(tree, "adsr", "Amp Env");
adsrComponent->enableGraphScope([this](float x) {
auto& tree = this->audioProcessor.parameters;
float A = tree.getParameter("adsr_attack")->getValue();
float D = tree.getParameter("adsr_decay")->getValue();
float S = tree.getParameter("adsr_sustain")->getValue();
float R = tree.getParameter("adsr_release")->getValue();
const float sustainLen = 1.0f;
const float total = A + D + sustainLen + R;
A /= total; D /= total; R /= total;
float m = 0.0f, c = 0.0f;
if (x < A) { m = 1.0f / A; c = 0.0f; }
else if (x < A + D) { m = (S - 1.0f) / D; c = 1.0f - m * A; }
else if (x < A + D + (sustainLen / total)) { m = 0.0f; c = S; }
else { m = (S / -R); c = -m; }
return m * x + c;
});
addAndMakeVisible(*adsrComponent);
chorusComponent.emplace(tree, "chorus", "Chorus");
chorusComponent->enableSampleScope(audioProcessor.getChorusAudioBufferQueue());
addAndMakeVisible(*chorusComponent);
delayComponent.emplace(tree, "delay", "Delay");
delayComponent->enableSampleScope(audioProcessor.getDelayAudioBufferQueue());
addAndMakeVisible(*delayComponent);
reverbComponent.emplace(tree, "reverb", "Reverb");
reverbComponent->enableSampleScope(audioProcessor.getReverbAudioBufferQueue());
addAndMakeVisible(*reverbComponent);
eqComponent.emplace(tree, "EQ");
addAndMakeVisible(*eqComponent);
flangerComponent.emplace(tree, "flanger", "Flanger");
flangerComponent->enableSampleScope(audioProcessor.getFlangerAudioBufferQueue());
addAndMakeVisible(*flangerComponent);
distortionComponent.emplace(tree, "distortion", "Distortion");
distortionComponent->enableSampleScope(audioProcessor.getDistortionAudioBufferQueue());
addAndMakeVisible(*distortionComponent);
filterComponent.emplace(tree, "filter", "Filter");
filterComponent->enableSampleScope(audioProcessor.getFilterAudioBufferQueue());
addAndMakeVisible(*filterComponent);
filterEnvComponent.emplace(tree, "fenv", "Filter Env");
filterEnvComponent->enableGraphScope([this](float x) {
auto& tree = this->audioProcessor.parameters;
float A = tree.getParameter("fenv_attack")->getValue();
float D = tree.getParameter("fenv_decay")->getValue();
float S = tree.getParameter("fenv_sustain")->getValue();
float R = tree.getParameter("fenv_release")->getValue();
const float sustainLen = 1.0f;
const float total = A + D + sustainLen + R;
A /= total; D /= total; R /= total;
float m = 0.0f, c = 0.0f;
if (x < A) { m = 1.0f / A; c = 0.0f; }
else if (x < A + D) { m = (S - 1.0f) / D; c = 1.0f - m * A; }
else if (x < A + D + (sustainLen / total)) { m = 0.0f; c = S; }
else { m = (S / -R); c = -m; }
return m * x + c;
});
addAndMakeVisible(*filterEnvComponent);
// Master fader + label
addAndMakeVisible(masterLevelSlider);
masterLevelLabel.setText("Master", juce::dontSendNotification);
{
juce::Font f; f.setHeight(12.0f); f.setBold(true);
masterLevelLabel.setFont(f);
}
masterLevelLabel.setJustificationType(juce::Justification::centred);
addAndMakeVisible(masterLevelLabel);
// Blank placeholder
addAndMakeVisible(blankPanel);
// Attach master parameter
gainAttachment = std::make_unique<juce::AudioProcessorValueTreeState::SliderAttachment>(
audioProcessor.parameters, "master", masterLevelSlider.slider);
setSize(1400, 720);
}
//==============================================================================
NeuralSynthAudioProcessorEditor::~NeuralSynthAudioProcessorEditor() = default;
//==============================================================================
void NeuralSynthAudioProcessorEditor::paint (juce::Graphics& g)
{
g.fillAll(getLookAndFeel().findColour (juce::ResizableWindow::backgroundColourId));
}
//==============================================================================
void NeuralSynthAudioProcessorEditor::resized()
{
auto bounds = getLocalBounds().reduced(16);
juce::Grid grid;
grid.templateRows = {
juce::Grid::TrackInfo(juce::Grid::Fr(20)), // scope row
juce::Grid::TrackInfo(juce::Grid::Fr(40)), // row 1
juce::Grid::TrackInfo(juce::Grid::Fr(40)) // row 2
};
// 6 columns: 5 content + 1 sidebar (waveform+master)
grid.templateColumns = {
juce::Grid::TrackInfo(juce::Grid::Fr(18)),
juce::Grid::TrackInfo(juce::Grid::Fr(18)),
juce::Grid::TrackInfo(juce::Grid::Fr(18)),
juce::Grid::TrackInfo(juce::Grid::Fr(18)),
juce::Grid::TrackInfo(juce::Grid::Fr(18)),
juce::Grid::TrackInfo(juce::Grid::Fr(10))
};
// Row 0
grid.items.add(juce::GridItem(mainScopeComponent)
.withArea(juce::GridItem::Span(1), juce::GridItem::Span(5)));
grid.items.add(juce::GridItem(waveformSelector)
.withArea(juce::GridItem::Span(1), juce::GridItem::Span(1)));
// Row 1
grid.items.add(juce::GridItem(*adsrComponent));
grid.items.add(juce::GridItem(*chorusComponent));
grid.items.add(juce::GridItem(*delayComponent));
grid.items.add(juce::GridItem(*reverbComponent));
grid.items.add(juce::GridItem(*eqComponent));
grid.items.add(juce::GridItem(masterLevelLabel));
// Row 2
grid.items.add(juce::GridItem(*flangerComponent));
grid.items.add(juce::GridItem(*distortionComponent));
grid.items.add(juce::GridItem(*filterComponent));
grid.items.add(juce::GridItem(*filterEnvComponent));
grid.items.add(juce::GridItem(blankPanel));
grid.items.add(juce::GridItem(masterLevelSlider));
grid.performLayout(bounds);
}
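//============================== Editor's illustration ========================
// Not part of the plugin source. The amp-env and filter-env panels above install
// the same piecewise-linear ADSR shape, differing only in the parameter prefix
// ("adsr_" vs "fenv_"). A minimal sketch of a shared helper that both
// enableGraphScope() calls could reuse; "makeAdsrShape" is a hypothetical name,
// and it assumes the same normalised getValue() semantics as the lambdas above.
static std::function<float (float)> makeAdsrShape (juce::AudioProcessorValueTreeState& tree,
                                                   const juce::String& prefix)
{
    return [&tree, prefix] (float x)
    {
        float A       = tree.getParameter (prefix + "_attack")->getValue();
        float D       = tree.getParameter (prefix + "_decay")->getValue();
        const float S = tree.getParameter (prefix + "_sustain")->getValue();
        float R       = tree.getParameter (prefix + "_release")->getValue();

        const float sustainLen = 1.0f;              // fixed display length of the hold segment
        const float total = A + D + sustainLen + R; // normalise segment widths to x in [0, 1]
        A /= total; D /= total; R /= total;

        if (x < A)                          return x / A;                           // attack:  0 -> 1
        if (x < A + D)                      return 1.0f + (S - 1.0f) * (x - A) / D; // decay:   1 -> S
        if (x < A + D + sustainLen / total) return S;                               // sustain: hold S
        return S * (1.0f - x) / R;                                                  // release: S -> 0
    };
}
// Usage (hypothetical): adsrComponent->enableGraphScope (makeAdsrShape (tree, "adsr"));
//                       filterEnvComponent->enableGraphScope (makeAdsrShape (tree, "fenv"));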

View File

@@ -1,341 +1,341 @@
#pragma once
#include <JuceHeader.h>
#include "PluginProcessor.h"
#include "GraphComponent.h"
#include "ScopeComponent.h"
//============================== ScopeSliderComponent ==========================
// A generic panel: optional scope/graph + rotary sliders + labels.
// Adds a per-panel "On" toggle (bound to "<group>_on").
class ScopeSliderComponent : public juce::Component {
static const int fontSize = 11;
public:
ScopeSliderComponent(juce::AudioProcessorValueTreeState& tree,
const std::string paramGroup,
const juce::String& titleText = {})
: paramGroupId(paramGroup), treeRef(tree)
{
const auto& sliderDetails = PARAM_SETTINGS.at(paramGroup);
for (const auto& [name, sliderDetail] : sliderDetails) {
sliders.push_back(std::make_unique<juce::Slider>());
labels.push_back(std::make_unique<juce::Label>());
attachments.push_back(std::make_unique<juce::AudioProcessorValueTreeState::SliderAttachment>(
tree, paramGroup + "_" + name, *sliders.back()));
labels.back()->setText(sliderDetail.label, juce::dontSendNotification);
sliders.back()->setRange(sliderDetail.min, sliderDetail.max);
}
for (auto& slider : sliders)
{
slider->setSliderStyle(juce::Slider::Rotary);
slider->setTextBoxStyle(juce::Slider::TextBoxBelow, false, 50, 20);
addAndMakeVisible(*slider);
}
for (auto& label : labels)
{
juce::Font f; f.setHeight((float)fontSize); f.setBold(true);
label->setFont(f);
label->setColour(juce::Label::textColourId, juce::Colours::lightgreen);
label->setJustificationType(juce::Justification::centred);
addAndMakeVisible(*label);
}
if (titleText.isNotEmpty())
{
titleLabel.setText(titleText, juce::dontSendNotification);
juce::Font tf; tf.setHeight(12.0f); tf.setBold(true);
titleLabel.setFont(tf);
titleLabel.setJustificationType(juce::Justification::centredLeft);
titleLabel.setColour(juce::Label::textColourId, juce::Colours::white);
addAndMakeVisible(titleLabel);
}
// Bypass toggle (per panel), id "<group>_on"
bypassButton.setButtonText("On");
bypassButton.setClickingTogglesState(true);
addAndMakeVisible(bypassButton);
bypassAttachment = std::make_unique<juce::AudioProcessorValueTreeState::ButtonAttachment>(
treeRef, paramGroupId + "_on", bypassButton);
}
void enableSampleScope(AudioBufferQueue<float>& audioBufferQueue) {
scope.emplace(audioBufferQueue);
useGraphScope = false;
addAndMakeVisible(*scope);
}
void enableGraphScope(const std::function<float(float)>& func) {
graphScope.emplace(0.0f, 1.0f, 100);
graphScope->setFunction(func);
useGraphScope = true;
addAndMakeVisible(*graphScope);
}
private:
void paint(juce::Graphics& g) override
{
g.fillAll(juce::Colours::darkgrey);
g.setColour(juce::Colours::white);
g.drawRect(getLocalBounds());
}
void resized() override
{
// --- Top bar (manual) ----------------------------------------------
auto area = getLocalBounds().reduced(10);
auto top = area.removeFromTop(22);
auto btnW = 46;
bypassButton.setBounds(top.removeFromRight(btnW).reduced(2, 1));
titleLabel.setBounds(top);
// --- Rest (grid) ----------------------------------------------------
juce::Grid grid;
grid.templateRows = {
juce::Grid::TrackInfo(juce::Grid::Fr(55)), // scope/graph
juce::Grid::TrackInfo(juce::Grid::Fr(30)), // sliders
juce::Grid::TrackInfo(juce::Grid::Fr(15)) // labels
};
const int n = (int)sliders.size();
grid.templateColumns.resize(n);
for (int i = 0; i < n; ++i)
grid.templateColumns.getReference(i) = juce::Grid::TrackInfo(juce::Grid::Fr(1));
grid.items.clear();
// Row 1: scope/graph (added only if one has been constructed)
if (useGraphScope)
{
if (graphScope)
grid.items.add(juce::GridItem(*graphScope)
.withArea(juce::GridItem::Span(1), juce::GridItem::Span(n)));
else
grid.items.add(juce::GridItem()
.withArea(juce::GridItem::Span(1), juce::GridItem::Span(n)));
}
else
{
if (scope)
grid.items.add(juce::GridItem(*scope)
.withArea(juce::GridItem::Span(1), juce::GridItem::Span(n)));
else
grid.items.add(juce::GridItem()
.withArea(juce::GridItem::Span(1), juce::GridItem::Span(n)));
}
// Row 2: sliders
for (int i = 0; i < n; ++i)
grid.items.add(juce::GridItem(*sliders[(size_t)i]));
// Row 3: labels
for (int i = 0; i < n; ++i)
grid.items.add(juce::GridItem(*labels[(size_t)i]));
grid.performLayout(area);
}
bool useGraphScope{ false };
std::optional<ScopeComponent<float>> scope;
std::optional<GraphComponent<float>> graphScope;
std::vector<std::unique_ptr<juce::Slider>> sliders;
std::vector<std::unique_ptr<juce::Label>> labels;
std::vector<std::unique_ptr<juce::AudioProcessorValueTreeState::SliderAttachment>> attachments;
juce::ToggleButton bypassButton;
std::unique_ptr<juce::AudioProcessorValueTreeState::ButtonAttachment> bypassAttachment;
juce::Label titleLabel;
std::string paramGroupId;
juce::AudioProcessorValueTreeState& treeRef;
};
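//============================== Editor's illustration ========================
// Not part of the plugin source. ScopeSliderComponent is driven entirely by the
// PARAM_SETTINGS table: the constructor builds one rotary slider + label per entry
// in the group and binds a "<group>_on" bypass toggle. Adding a hypothetical
// "phaser" panel would therefore only need a PARAM_SETTINGS entry for "phaser",
// matching "phaser_*" / "phaser_on" parameters in the processor, and this wiring
// in the editor (the names below are illustrative, not existing code):
//
//     std::optional<ScopeSliderComponent> phaserComponent;   // editor member
//     ...
//     phaserComponent.emplace (tree, "phaser", "Phaser");
//     phaserComponent->enableSampleScope (audioProcessor.getPhaserAudioBufferQueue());
//     addAndMakeVisible (*phaserComponent);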
//============================== EqualizerComponent ============================
// Adds an On/Off toggle bound to "eq_on".
class EqualizerComponent : public juce::Component {
static const int fontSize = 11;
public:
explicit EqualizerComponent(juce::AudioProcessorValueTreeState& tree,
const juce::String& titleText = {})
{
setupSlider(lowGainSlider);
setupSlider(midGainSlider);
setupSlider(highGainSlider);
setupLabel(lowGainLabel, "L");
setupLabel(midGainLabel, "M");
setupLabel(highGainLabel, "H");
if (titleText.isNotEmpty())
{
titleLabel.setText(titleText, juce::dontSendNotification);
juce::Font tf; tf.setHeight(13.0f); tf.setBold(true);
titleLabel.setFont(tf);
titleLabel.setJustificationType(juce::Justification::centredLeft);
titleLabel.setColour(juce::Label::textColourId, juce::Colours::white);
addAndMakeVisible(titleLabel);
}
// Attachments
lowGainAttachment = std::make_unique<juce::AudioProcessorValueTreeState::SliderAttachment>(tree, "lowEQ", lowGainSlider);
midGainAttachment = std::make_unique<juce::AudioProcessorValueTreeState::SliderAttachment>(tree, "midEQ", midGainSlider);
highGainAttachment = std::make_unique<juce::AudioProcessorValueTreeState::SliderAttachment>(tree, "highEQ", highGainSlider);
// EQ bypass toggle
bypassButton.setButtonText("On");
bypassButton.setClickingTogglesState(true);
addAndMakeVisible(bypassButton);
bypassAttachment = std::make_unique<juce::AudioProcessorValueTreeState::ButtonAttachment>(tree, "eq_on", bypassButton);
}
private:
void setupSlider(juce::Slider& slider) {
slider.setRange(-24.0f, 24.0f, 0.1f);
slider.setSliderStyle(juce::Slider::LinearBarVertical);
slider.setTextBoxStyle(juce::Slider::TextBoxBelow, false, 50, 20);
addAndMakeVisible(slider);
}
void setupLabel(juce::Label& lbl, juce::String txt) {
juce::Font f; f.setHeight((float)fontSize); f.setBold(true);
lbl.setFont(f);
lbl.setColour(juce::Label::textColourId, juce::Colours::lightgreen);
lbl.setJustificationType(juce::Justification::centred);
lbl.setText(txt, juce::dontSendNotification);
addAndMakeVisible(lbl);
}
void paint(juce::Graphics& g) override {
g.fillAll(juce::Colours::darkgrey);
g.setColour(juce::Colours::white);
g.drawRect(getLocalBounds());
}
void resized() override {
auto area = getLocalBounds().reduced(10);
auto top = area.removeFromTop(22);
auto btnW = 46;
bypassButton.setBounds(top.removeFromRight(btnW).reduced(2, 1));
titleLabel.setBounds(top);
juce::Grid grid;
grid.templateRows = {
juce::Grid::TrackInfo(juce::Grid::Fr(1)),
juce::Grid::TrackInfo(juce::Grid::Fr(1))
};
grid.templateColumns = {
juce::Grid::TrackInfo(juce::Grid::Fr(1)),
juce::Grid::TrackInfo(juce::Grid::Fr(1)),
juce::Grid::TrackInfo(juce::Grid::Fr(1))
};
grid.items = {
lowGainSlider, midGainSlider, highGainSlider,
lowGainLabel, midGainLabel, highGainLabel
};
grid.performLayout(area);
}
juce::Slider lowGainSlider, midGainSlider, highGainSlider;
juce::Label lowGainLabel, midGainLabel, highGainLabel;
std::unique_ptr<juce::AudioProcessorValueTreeState::SliderAttachment> lowGainAttachment, midGainAttachment, highGainAttachment;
juce::ToggleButton bypassButton;
std::unique_ptr<juce::AudioProcessorValueTreeState::ButtonAttachment> bypassAttachment;
juce::Label titleLabel;
};
//============================== Waveform List Model ===========================
struct WaveformSelectorContents final : public juce::ListBoxModel
{
int getNumRows() override { return 4; }
void paintListBoxItem(int rowNumber, juce::Graphics& g,
int width, int height, bool rowIsSelected) override
{
if (rowIsSelected) g.fillAll(juce::Colours::lightblue);
g.setColour(juce::LookAndFeel::getDefaultLookAndFeel()
.findColour(juce::Label::textColourId));
juce::Font f; f.setHeight((float)height * 0.7f);
g.setFont(f);
g.drawText(waves[(size_t)rowNumber], 5, 0, width, height,
juce::Justification::centredLeft, true);
}
void selectedRowsChanged (int lastRowSelected) override
{
if (onSelect) onSelect(lastRowSelected);
}
std::function<void (int)> onSelect;
std::vector<juce::String> waves { "Sine", "Saw", "Square", "Triangle" };
};
//============================== MasterVolumeComponent =========================
class MasterVolumeComponent : public juce::Component
{
public:
MasterVolumeComponent()
{
slider.setSliderStyle(juce::Slider::LinearBarVertical);
slider.setTextBoxStyle(juce::Slider::NoTextBox, false, 20, 20);
addAndMakeVisible(slider);
}
void resized() override
{
slider.setBounds(getLocalBounds().reduced(30));
}
juce::Slider slider;
};
//============================== Editor =======================================
class NeuralSynthAudioProcessorEditor : public juce::AudioProcessorEditor
{
public:
NeuralSynthAudioProcessorEditor (NeuralSynthAudioProcessor&);
~NeuralSynthAudioProcessorEditor() override;
void paint (juce::Graphics&) override;
void resized() override;
private:
NeuralSynthAudioProcessor& audioProcessor;
JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (NeuralSynthAudioProcessorEditor)
juce::ListBox waveformSelector;
WaveformSelectorContents waveformContents;
std::optional<ScopeSliderComponent> adsrComponent; // Amp Env
std::optional<ScopeSliderComponent> chorusComponent;
std::optional<ScopeSliderComponent> delayComponent;
std::optional<ScopeSliderComponent> reverbComponent;
std::optional<ScopeSliderComponent> flangerComponent;
std::optional<ScopeSliderComponent> distortionComponent;
std::optional<ScopeSliderComponent> filterComponent;
std::optional<ScopeSliderComponent> filterEnvComponent; // Filter Env panel
MasterVolumeComponent masterLevelSlider;
juce::Label masterLevelLabel;
std::optional<EqualizerComponent> eqComponent;
std::unique_ptr<juce::AudioProcessorValueTreeState::SliderAttachment> gainAttachment;
ScopeComponent<float> mainScopeComponent;
juce::Component blankPanel;
};

View File

@@ -1,270 +1,270 @@
#include "PluginProcessor.h"
#include "PluginEditor.h"
//==============================================================================
NeuralSynthAudioProcessor::NeuralSynthAudioProcessor()
: AudioProcessor(BusesProperties().withOutput("Output", juce::AudioChannelSet::stereo(), true))
, parameters(*this, nullptr, "PARAMETERS", createParameterLayout())
, audioEngine(sp)
{
parameters.addParameterListener("waveform", this);
// === Per-panel bypass (default OFF) ===
sp.chorusOn = parameters.getRawParameterValue("chorus_on");
sp.delayOn = parameters.getRawParameterValue("delay_on");
sp.reverbOn = parameters.getRawParameterValue("reverb_on");
sp.flangerOn = parameters.getRawParameterValue("flanger_on");
sp.distortionOn = parameters.getRawParameterValue("distortion_on");
sp.filterOn = parameters.getRawParameterValue("filter_on");
sp.eqOn = parameters.getRawParameterValue("eq_on");
// === Chorus ===
parameters.addParameterListener("chorus_rate", this);
parameters.addParameterListener("chorus_depth", this);
parameters.addParameterListener("chorus_centre", this);
parameters.addParameterListener("chorus_feedback", this);
parameters.addParameterListener("chorus_mix", this);
sp.chorusRate = parameters.getRawParameterValue("chorus_rate");
sp.chorusDepth = parameters.getRawParameterValue("chorus_depth");
sp.chorusCentre = parameters.getRawParameterValue("chorus_centre");
sp.chorusFeedback = parameters.getRawParameterValue("chorus_feedback");
sp.chorusMix = parameters.getRawParameterValue("chorus_mix");
// === Delay ===
parameters.addParameterListener("delay_delay", this);
sp.delayTime = parameters.getRawParameterValue("delay_delay");
// === Reverb ===
parameters.addParameterListener("reverb_roomSize", this);
parameters.addParameterListener("reverb_damping", this);
parameters.addParameterListener("reverb_wetLevel", this);
parameters.addParameterListener("reverb_dryLevel", this);
parameters.addParameterListener("reverb_width", this);
parameters.addParameterListener("reverb_freezeMode", this);
sp.reverbRoomSize = parameters.getRawParameterValue("reverb_roomSize");
sp.reverbDamping = parameters.getRawParameterValue("reverb_damping");
sp.reverbWetLevel = parameters.getRawParameterValue("reverb_wetLevel");
sp.reverbDryLevel = parameters.getRawParameterValue("reverb_dryLevel");
sp.reverbWidth = parameters.getRawParameterValue("reverb_width");
sp.reverbFreezeMode= parameters.getRawParameterValue("reverb_freezeMode");
// === Amp ADSR ===
parameters.addParameterListener("adsr_attack", this);
parameters.addParameterListener("adsr_decay", this);
parameters.addParameterListener("adsr_sustain", this);
parameters.addParameterListener("adsr_release", this);
sp.adsrAttack = parameters.getRawParameterValue("adsr_attack");
sp.adsrDecay = parameters.getRawParameterValue("adsr_decay");
sp.adsrSustain = parameters.getRawParameterValue("adsr_sustain");
sp.adsrRelease = parameters.getRawParameterValue("adsr_release");
// === Filter Env ===
parameters.addParameterListener("fenv_attack", this);
parameters.addParameterListener("fenv_decay", this);
parameters.addParameterListener("fenv_sustain", this);
parameters.addParameterListener("fenv_release", this);
parameters.addParameterListener("fenv_amount", this);
sp.fenvAttack = parameters.getRawParameterValue("fenv_attack");
sp.fenvDecay = parameters.getRawParameterValue("fenv_decay");
sp.fenvSustain = parameters.getRawParameterValue("fenv_sustain");
sp.fenvRelease = parameters.getRawParameterValue("fenv_release");
sp.fenvAmount = parameters.getRawParameterValue("fenv_amount");
// === Filter base ===
parameters.addParameterListener("filter_cutoff", this);
parameters.addParameterListener("filter_resonance", this);
parameters.addParameterListener("filter_type", this);
parameters.addParameterListener("filter_drive", this);
parameters.addParameterListener("filter_mod", this);
parameters.addParameterListener("filter_key", this);
sp.filterCutoff = parameters.getRawParameterValue("filter_cutoff");
sp.filterResonance = parameters.getRawParameterValue("filter_resonance");
sp.filterType = parameters.getRawParameterValue("filter_type");
sp.filterDrive = parameters.getRawParameterValue("filter_drive");
sp.filterMod = parameters.getRawParameterValue("filter_mod");
sp.filterKey = parameters.getRawParameterValue("filter_key");
// === Distortion ===
parameters.addParameterListener("distortion_drive", this);
parameters.addParameterListener("distortion_mix", this);
parameters.addParameterListener("distortion_bias", this);
parameters.addParameterListener("distortion_tone", this);
parameters.addParameterListener("distortion_shape", this);
sp.distortionDrive = parameters.getRawParameterValue("distortion_drive");
sp.distortionMix = parameters.getRawParameterValue("distortion_mix");
sp.distortionBias = parameters.getRawParameterValue("distortion_bias");
sp.distortionTone = parameters.getRawParameterValue("distortion_tone");
sp.distortionShape = parameters.getRawParameterValue("distortion_shape");
// === Master / EQ ===
parameters.addParameterListener("master", this);
parameters.addParameterListener("lowEQ", this);
parameters.addParameterListener("midEQ", this);
parameters.addParameterListener("highEQ", this);
sp.masterDbls = parameters.getRawParameterValue("master");
sp.lowGainDbls = parameters.getRawParameterValue("lowEQ");
sp.midGainDbls = parameters.getRawParameterValue("midEQ");
sp.highGainDbls = parameters.getRawParameterValue("highEQ");
}
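//============================== Editor's note =================================
// NeuralSharedParams ('sp') is declared in NeuralSharedParams.h, which is not part
// of this diff. Reconstructed purely from how it is used here, it is essentially a
// bundle of raw-parameter pointers plus one atomic used as a UI-to-audio mailbox
// (the real header may differ; this is only a reading aid):
//
//     struct NeuralSharedParams
//     {
//         std::atomic<int>    waveform { -1 };       // written in parameterChanged(), drained in processBlock()
//         std::atomic<float>* chorusOn   = nullptr;  // results of getRawParameterValue()
//         std::atomic<float>* chorusRate = nullptr;  // ...one pointer per parameter id wired above
//         // etc.
//     };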
NeuralSynthAudioProcessor::~NeuralSynthAudioProcessor() = default;
//==============================================================================
const juce::String NeuralSynthAudioProcessor::getName() const { return JucePlugin_Name; }
bool NeuralSynthAudioProcessor::acceptsMidi() const
{
#if JucePlugin_WantsMidiInput
return true;
#else
return false;
#endif
}
bool NeuralSynthAudioProcessor::producesMidi() const
{
#if JucePlugin_ProducesMidiOutput
return true;
#else
return false;
#endif
}
bool NeuralSynthAudioProcessor::isMidiEffect() const
{
#if JucePlugin_IsMidiEffect
return true;
#else
return false;
#endif
}
double NeuralSynthAudioProcessor::getTailLengthSeconds() const { return 0.0; }
int NeuralSynthAudioProcessor::getNumPrograms() { return 1; }
int NeuralSynthAudioProcessor::getCurrentProgram() { return 0; }
void NeuralSynthAudioProcessor::setCurrentProgram (int) {}
const juce::String NeuralSynthAudioProcessor::getProgramName (int) { return {}; }
void NeuralSynthAudioProcessor::changeProgramName (int, const juce::String&) {}
//==============================================================================
void NeuralSynthAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
audioEngine.prepare({ sampleRate, (juce::uint32)samplesPerBlock, 2 });
midiMessageCollector.reset(sampleRate);
}
void NeuralSynthAudioProcessor::releaseResources() {}
bool NeuralSynthAudioProcessor::isBusesLayoutSupported (const BusesLayout& layouts) const
{
if (layouts.getMainOutputChannelSet() != juce::AudioChannelSet::mono()
&& layouts.getMainOutputChannelSet() != juce::AudioChannelSet::stereo())
return false;
return true;
}
void NeuralSynthAudioProcessor::processBlock(juce::AudioSampleBuffer& buffer, juce::MidiBuffer& midiMessages)
{
const int newWaveform = sp.waveform.exchange(-1);
if (newWaveform != -1) {
audioEngine.applyToVoices([newWaveform](NeuralSynthVoice* v)
{
v->changeWaveform(newWaveform);
});
}
juce::ScopedNoDenormals noDenormals;
auto totalNumInputChannels = getTotalNumInputChannels();
auto totalNumOutputChannels = getTotalNumOutputChannels();
midiMessageCollector.removeNextBlockOfMessages(midiMessages, buffer.getNumSamples());
for (int i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
buffer.clear(i, 0, buffer.getNumSamples());
audioEngine.renderNextBlock(buffer, midiMessages, 0, buffer.getNumSamples());
scopeDataCollector.process(buffer.getReadPointer(0), (size_t)buffer.getNumSamples());
}
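//============================== Editor's illustration ========================
// Not part of the plugin source. The waveform change above is a small lock-free
// mailbox: parameterChanged() (message thread) stores the new index, and
// processBlock() (audio thread) exchanges it with the sentinel -1, so a change is
// applied to the voices at most once and the audio callback never blocks or
// allocates. The same pattern in isolation (names are illustrative only):
//
//     std::atomic<int> pendingWaveform { -1 };
//
//     void onUiChange (int newIndex)                  // message thread
//     {
//         pendingWaveform.store (newIndex, std::memory_order_release);
//     }
//
//     void onAudioBlock()                             // audio thread
//     {
//         if (const int w = pendingWaveform.exchange (-1); w != -1)
//             applyWaveformToAllVoices (w);           // hypothetical helper
//     }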
//==============================================================================
bool NeuralSynthAudioProcessor::hasEditor() const { return true; }
juce::AudioProcessorEditor* NeuralSynthAudioProcessor::createEditor()
{
return new NeuralSynthAudioProcessorEditor (*this);
}
//==============================================================================
void NeuralSynthAudioProcessor::getStateInformation (juce::MemoryBlock& destData) { juce::ignoreUnused(destData); }
void NeuralSynthAudioProcessor::setStateInformation (const void* data, int sizeInBytes) { juce::ignoreUnused(data, sizeInBytes); }
void NeuralSynthAudioProcessor::parameterChanged(const juce::String& id, float newValue)
{
if (id == "waveform")
sp.waveform.store((int)newValue, std::memory_order_release);
}
//==============================================================================
// This creates new instances of the plugin.
juce::AudioProcessor* JUCE_CALLTYPE createPluginFilter() { return new NeuralSynthAudioProcessor(); }
void NeuralSynthAudioProcessor::buildParams(std::vector<std::unique_ptr<juce::RangedAudioParameter>>& params, const std::string& paramGroup) {
const auto& paramGroupSettings = PARAM_SETTINGS.at(paramGroup);
for (const auto& [name, s] : paramGroupSettings) {
params.push_back(std::make_unique<juce::AudioParameterFloat>(
paramGroup + "_" + name, s.label,
juce::NormalisableRange<float>(s.min, s.max, s.interval),
s.defValue));
}
}
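//============================== Editor's note =================================
// buildParams() above and ScopeSliderComponent in the editor both iterate
// PARAM_SETTINGS.at(group), defined elsewhere in the repo. From its use here each
// entry must provide at least label/min/max/interval/defValue, i.e. roughly
// (illustrative only; the real definition may differ, e.g. in container choice):
//
//     struct SliderSetting { juce::String label; float min, max, interval, defValue; };
//     // PARAM_SETTINGS: group id -> (parameter name -> SliderSetting)
//     extern const std::map<std::string, std::map<std::string, SliderSetting>> PARAM_SETTINGS;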
juce::AudioProcessorValueTreeState::ParameterLayout NeuralSynthAudioProcessor::createParameterLayout()
{
std::vector<std::unique_ptr<juce::RangedAudioParameter>> params;
params.push_back(std::make_unique<juce::AudioParameterChoice>(
"waveform", "Waveform",
juce::StringArray{ "Sine", "Saw", "Square", "Triangle" }, 0));
// Per-panel bypass toggles (default OFF)
params.push_back(std::make_unique<juce::AudioParameterBool>("chorus_on", "Chorus On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("delay_on", "Delay On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("reverb_on", "Reverb On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("flanger_on", "Flanger On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("distortion_on", "Distortion On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("filter_on", "Filter On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("eq_on", "EQ On", false));
buildParams(params, "adsr");
buildParams(params, "fenv");
buildParams(params, "chorus");
buildParams(params, "delay");
buildParams(params, "reverb");
buildParams(params, "flanger");
buildParams(params, "distortion");
buildParams(params, "filter");
params.push_back(std::make_unique<juce::AudioParameterFloat>("master", "Master",
juce::NormalisableRange<float>(-24.0f, 24.0f, 0.1f), 0.1f));
params.push_back(std::make_unique<juce::AudioParameterFloat>("lowEQ", "Low Gain",
juce::NormalisableRange<float>(-24.0f, 24.0f, 0.1f), 0.5f));
params.push_back(std::make_unique<juce::AudioParameterFloat>("midEQ", "Mid EQ",
juce::NormalisableRange<float>(-24.0f, 24.0f, 0.1f), 0.8f));
params.push_back(std::make_unique<juce::AudioParameterFloat>("highEQ", "High EQ",
juce::NormalisableRange<float>(-24.0f, 24.0f, 0.1f), 1.0f));
return { params.begin(), params.end() };
}
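//============================== Editor's note =================================
// getStateInformation()/setStateInformation() above are still empty stubs, so host
// sessions and presets will not recall parameter values. A sketch of the usual
// APVTS-based implementation (not the author's code, shown here for reference):
//
//     void NeuralSynthAudioProcessor::getStateInformation (juce::MemoryBlock& destData)
//     {
//         if (auto xml = parameters.copyState().createXml())
//             copyXmlToBinary (*xml, destData);
//     }
//
//     void NeuralSynthAudioProcessor::setStateInformation (const void* data, int sizeInBytes)
//     {
//         if (auto xml = getXmlFromBinary (data, sizeInBytes))
//             if (xml->hasTagName (parameters.state.getType()))
//                 parameters.replaceState (juce::ValueTree::fromXml (*xml));
//     }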
#include "PluginProcessor.h"
#include "PluginEditor.h"
//==============================================================================
NeuralSynthAudioProcessor::NeuralSynthAudioProcessor()
: parameters(*this, nullptr, "PARAMETERS", createParameterLayout())
, AudioProcessor(BusesProperties().withOutput("Output", juce::AudioChannelSet::stereo(), true))
, audioEngine(sp)
{
parameters.addParameterListener("waveform", this);
// === Per-panel bypass (default OFF) ===
sp.chorusOn = parameters.getRawParameterValue("chorus_on");
sp.delayOn = parameters.getRawParameterValue("delay_on");
sp.reverbOn = parameters.getRawParameterValue("reverb_on");
sp.flangerOn = parameters.getRawParameterValue("flanger_on");
sp.distortionOn = parameters.getRawParameterValue("distortion_on");
sp.filterOn = parameters.getRawParameterValue("filter_on");
sp.eqOn = parameters.getRawParameterValue("eq_on");
// === Chorus ===
parameters.addParameterListener("chorus_rate", this);
parameters.addParameterListener("chorus_depth", this);
parameters.addParameterListener("chorus_centre", this);
parameters.addParameterListener("chorus_feedback", this);
parameters.addParameterListener("chorus_mix", this);
sp.chorusRate = parameters.getRawParameterValue("chorus_rate");
sp.chorusDepth = parameters.getRawParameterValue("chorus_depth");
sp.chorusCentre = parameters.getRawParameterValue("chorus_centre");
sp.chorusFeedback = parameters.getRawParameterValue("chorus_feedback");
sp.chorusMix = parameters.getRawParameterValue("chorus_mix");
// === Delay ===
parameters.addParameterListener("delay_delay", this);
sp.delayTime = parameters.getRawParameterValue("delay_delay");
// === Reverb ===
parameters.addParameterListener("reverb_roomSize", this);
parameters.addParameterListener("reverb_damping", this);
parameters.addParameterListener("reverb_wetLevel", this);
parameters.addParameterListener("reverb_dryLevel", this);
parameters.addParameterListener("reverb_width", this);
parameters.addParameterListener("reverb_freezeMode", this);
sp.reverbRoomSize = parameters.getRawParameterValue("reverb_roomSize");
sp.reverbDamping = parameters.getRawParameterValue("reverb_damping");
sp.reverbWetLevel = parameters.getRawParameterValue("reverb_wetLevel");
sp.reverbDryLevel = parameters.getRawParameterValue("reverb_dryLevel");
sp.reverbWidth = parameters.getRawParameterValue("reverb_width");
sp.reverbFreezeMode= parameters.getRawParameterValue("reverb_freezeMode");
// === Amp ADSR ===
parameters.addParameterListener("adsr_attack", this);
parameters.addParameterListener("adsr_decay", this);
parameters.addParameterListener("adsr_sustain", this);
parameters.addParameterListener("adsr_release", this);
sp.adsrAttack = parameters.getRawParameterValue("adsr_attack");
sp.adsrDecay = parameters.getRawParameterValue("adsr_decay");
sp.adsrSustain = parameters.getRawParameterValue("adsr_sustain");
sp.adsrRelease = parameters.getRawParameterValue("adsr_release");
// === Filter Env ===
parameters.addParameterListener("fenv_attack", this);
parameters.addParameterListener("fenv_decay", this);
parameters.addParameterListener("fenv_sustain", this);
parameters.addParameterListener("fenv_release", this);
parameters.addParameterListener("fenv_amount", this);
sp.fenvAttack = parameters.getRawParameterValue("fenv_attack");
sp.fenvDecay = parameters.getRawParameterValue("fenv_decay");
sp.fenvSustain = parameters.getRawParameterValue("fenv_sustain");
sp.fenvRelease = parameters.getRawParameterValue("fenv_release");
sp.fenvAmount = parameters.getRawParameterValue("fenv_amount");
// === Filter base ===
parameters.addParameterListener("filter_cutoff", this);
parameters.addParameterListener("filter_resonance", this);
parameters.addParameterListener("filter_type", this);
parameters.addParameterListener("filter_drive", this);
parameters.addParameterListener("filter_mod", this);
parameters.addParameterListener("filter_key", this);
sp.filterCutoff = parameters.getRawParameterValue("filter_cutoff");
sp.filterResonance = parameters.getRawParameterValue("filter_resonance");
sp.filterType = parameters.getRawParameterValue("filter_type");
sp.filterDrive = parameters.getRawParameterValue("filter_drive");
sp.filterMod = parameters.getRawParameterValue("filter_mod");
sp.filterKey = parameters.getRawParameterValue("filter_key");
// === Distortion ===
parameters.addParameterListener("distortion_drive", this);
parameters.addParameterListener("distortion_mix", this);
parameters.addParameterListener("distortion_bias", this);
parameters.addParameterListener("distortion_tone", this);
parameters.addParameterListener("distortion_shape", this);
sp.distortionDrive = parameters.getRawParameterValue("distortion_drive");
sp.distortionMix = parameters.getRawParameterValue("distortion_mix");
sp.distortionBias = parameters.getRawParameterValue("distortion_bias");
sp.distortionTone = parameters.getRawParameterValue("distortion_tone");
sp.distortionShape = parameters.getRawParameterValue("distortion_shape");
// === Master / EQ ===
parameters.addParameterListener("master", this);
parameters.addParameterListener("lowEQ", this);
parameters.addParameterListener("midEQ", this);
parameters.addParameterListener("highEQ", this);
sp.masterDbls = parameters.getRawParameterValue("master");
sp.lowGainDbls = parameters.getRawParameterValue("lowEQ");
sp.midGainDbls = parameters.getRawParameterValue("midEQ");
sp.highGainDbls = parameters.getRawParameterValue("highEQ");
}
NeuralSynthAudioProcessor::~NeuralSynthAudioProcessor() = default;
//==============================================================================
const juce::String NeuralSynthAudioProcessor::getName() const { return JucePlugin_Name; }
bool NeuralSynthAudioProcessor::acceptsMidi() const
{
#if JucePlugin_WantsMidiInput
return true;
#else
return false;
#endif
}
bool NeuralSynthAudioProcessor::producesMidi() const
{
#if JucePlugin_ProducesMidiOutput
return true;
#else
return false;
#endif
}
bool NeuralSynthAudioProcessor::isMidiEffect() const
{
#if JucePlugin_IsMidiEffect
return true;
#else
return false;
#endif
}
double NeuralSynthAudioProcessor::getTailLengthSeconds() const { return 0.0; }
int NeuralSynthAudioProcessor::getNumPrograms() { return 1; }
int NeuralSynthAudioProcessor::getCurrentProgram() { return 0; }
void NeuralSynthAudioProcessor::setCurrentProgram (int) {}
const juce::String NeuralSynthAudioProcessor::getProgramName (int) { return {}; }
void NeuralSynthAudioProcessor::changeProgramName (int, const juce::String&) {}
//==============================================================================
void NeuralSynthAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
{
audioEngine.prepare({ sampleRate, (juce::uint32)samplesPerBlock, 2 });
midiMessageCollector.reset(sampleRate);
}
void NeuralSynthAudioProcessor::releaseResources() {}
bool NeuralSynthAudioProcessor::isBusesLayoutSupported (const BusesLayout& layouts) const
{
if (layouts.getMainOutputChannelSet() != juce::AudioChannelSet::mono()
&& layouts.getMainOutputChannelSet() != juce::AudioChannelSet::stereo())
return false;
return true;
}
void NeuralSynthAudioProcessor::processBlock(juce::AudioSampleBuffer& buffer, juce::MidiBuffer& midiMessages)
{
const int newWaveform = sp.waveform.exchange(-1);
if (newWaveform != -1) {
audioEngine.applyToVoices([newWaveform](NeuralSynthVoice* v)
{
v->changeWaveform(newWaveform);
});
}
juce::ScopedNoDenormals noDenormals;
auto totalNumInputChannels = getTotalNumInputChannels();
auto totalNumOutputChannels = getTotalNumOutputChannels();
midiMessageCollector.removeNextBlockOfMessages(midiMessages, buffer.getNumSamples());
for (int i = totalNumInputChannels; i < totalNumOutputChannels; ++i)
buffer.clear(i, 0, buffer.getNumSamples());
audioEngine.renderNextBlock(buffer, midiMessages, 0, buffer.getNumSamples());
scopeDataCollector.process(buffer.getReadPointer(0), (size_t)buffer.getNumSamples());
}
//==============================================================================
bool NeuralSynthAudioProcessor::hasEditor() const { return true; }
juce::AudioProcessorEditor* NeuralSynthAudioProcessor::createEditor()
{
return new NeuralSynthAudioProcessorEditor (*this);
}
//==============================================================================
void NeuralSynthAudioProcessor::getStateInformation (juce::MemoryBlock& destData) { juce::ignoreUnused(destData); }
void NeuralSynthAudioProcessor::setStateInformation (const void* data, int sizeInBytes) { juce::ignoreUnused(data, sizeInBytes); }
void NeuralSynthAudioProcessor::parameterChanged(const juce::String& id, float newValue)
{
juce::ignoreUnused(newValue);
if (id == "waveform")
sp.waveform.store((int)newValue, std::memory_order_release);
}
//==============================================================================
// This creates new instances of the plugin..
juce::AudioProcessor* JUCE_CALLTYPE createPluginFilter() { return new NeuralSynthAudioProcessor(); }
void NeuralSynthAudioProcessor::buildParams(std::vector<std::unique_ptr<juce::RangedAudioParameter>>& params, const std::string& paramGroup) {
const auto& paramGroupSettings = PARAM_SETTINGS.at(paramGroup);
for (const auto& [name, s] : paramGroupSettings) {
params.push_back(std::make_unique<juce::AudioParameterFloat>(
paramGroup + "_" + name, s.label,
juce::NormalisableRange<float>(s.min, s.max, s.interval),
s.defValue));
}
}
juce::AudioProcessorValueTreeState::ParameterLayout NeuralSynthAudioProcessor::createParameterLayout()
{
std::vector<std::unique_ptr<juce::RangedAudioParameter>> params;
params.push_back(std::make_unique<juce::AudioParameterChoice>(
"waveform", "Waveform",
juce::StringArray{ "Sine", "Saw", "Square", "Triangle" }, 0));
// Per-panel bypass toggles (default OFF)
params.push_back(std::make_unique<juce::AudioParameterBool>("chorus_on", "Chorus On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("delay_on", "Delay On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("reverb_on", "Reverb On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("flanger_on", "Flanger On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("distortion_on", "Distortion On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("filter_on", "Filter On", false));
params.push_back(std::make_unique<juce::AudioParameterBool>("eq_on", "EQ On", false));
buildParams(params, "adsr");
buildParams(params, "fenv");
buildParams(params, "chorus");
buildParams(params, "delay");
buildParams(params, "reverb");
buildParams(params, "flanger");
buildParams(params, "distortion");
buildParams(params, "filter");
params.push_back(std::make_unique<juce::AudioParameterFloat>("master", "Master",
juce::NormalisableRange<float>(-24.0f, 24.0f, 0.1f), 0.1f));
params.push_back(std::make_unique<juce::AudioParameterFloat>("lowEQ", "Low Gain",
juce::NormalisableRange<float>(-24.0f, 24.0f, 0.1f), 0.5f));
params.push_back(std::make_unique<juce::AudioParameterFloat>("midEQ", "Mid EQ",
juce::NormalisableRange<float>(-24.0f, 24.0f, 0.1f), 0.8f));
params.push_back(std::make_unique<juce::AudioParameterFloat>("highEQ", "High EQ",
juce::NormalisableRange<float>(-24.0f, 24.0f, 0.1f), 1.0f));
return { params.begin(), params.end() };
}

View File

@@ -1,90 +1,90 @@
#pragma once
#include <JuceHeader.h>
#include "AudioBufferQueue.h"
#include "AudioEngine.h"
#include "ScopeDataCollector.h"
#include "NeuralSharedParams.h"
//==============================================================================
// Processor
class NeuralSynthAudioProcessor : public juce::AudioProcessor,
private juce::AudioProcessorValueTreeState::Listener
{
public:
NeuralSynthAudioProcessor();
~NeuralSynthAudioProcessor() override;
// AudioProcessor overrides
void prepareToPlay(double sampleRate, int samplesPerBlock) override;
void releaseResources() override;
#ifndef JucePlugin_PreferredChannelConfigurations
bool isBusesLayoutSupported(const BusesLayout& layouts) const override;
#endif
void processBlock(juce::AudioBuffer<float>&, juce::MidiBuffer&) override;
// Editor
juce::AudioProcessorEditor* createEditor() override;
bool hasEditor() const override;
// Info
const juce::String getName() const override;
bool acceptsMidi() const override;
bool producesMidi() const override;
bool isMidiEffect() const override;
double getTailLengthSeconds() const override;
// Programs
int getNumPrograms() override;
int getCurrentProgram() override;
void setCurrentProgram(int index) override;
const juce::String getProgramName(int index) override;
void changeProgramName(int index, const juce::String& newName) override;
// State
void getStateInformation(juce::MemoryBlock& destData) override;
void setStateInformation(const void* data, int sizeInBytes) override;
// Parameters
void parameterChanged(const juce::String& id, float newValue) override;
void buildParams(std::vector<std::unique_ptr<juce::RangedAudioParameter>>& params,
const std::string& paramGroup);
juce::AudioProcessorValueTreeState::ParameterLayout createParameterLayout();
// Utilities
juce::MidiMessageCollector& getMidiMessageCollector() noexcept { return midiMessageCollector; }
AudioBufferQueue<float>& getAudioBufferQueue() noexcept { return audioBufferQueue; }
AudioBufferQueue<float>& getChorusAudioBufferQueue() noexcept { return chorusBufferQueue; }
AudioBufferQueue<float>& getDelayAudioBufferQueue() noexcept { return delayBufferQueue; }
AudioBufferQueue<float>& getReverbAudioBufferQueue() noexcept { return reverbBufferQueue; }
AudioBufferQueue<float>& getFlangerAudioBufferQueue() noexcept { return flangerBufferQueue; }
AudioBufferQueue<float>& getDistortionAudioBufferQueue() noexcept { return distortionBufferQueue; }
AudioBufferQueue<float>& getFilterAudioBufferQueue() noexcept { return filterBufferQueue; }
// Public members (by JUCE convention)
juce::MidiMessageCollector midiMessageCollector;
juce::AudioProcessorValueTreeState parameters;
private:
JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (NeuralSynthAudioProcessor)
// ---- IMPORTANT ORDER FIX ----
// Objects are constructed in THIS order. 'sp' must come BEFORE audioEngine.
NeuralSharedParams sp; // <— construct first
NeuralAudioEngine audioEngine; // needs a valid reference to 'sp'
// Meter/scope queues
AudioBufferQueue<float> audioBufferQueue;
AudioBufferQueue<float> chorusBufferQueue;
AudioBufferQueue<float> delayBufferQueue;
AudioBufferQueue<float> reverbBufferQueue;
AudioBufferQueue<float> flangerBufferQueue;
AudioBufferQueue<float> distortionBufferQueue;
AudioBufferQueue<float> filterBufferQueue;
// Scope collector (uses audioBufferQueue, so declare after it)
ScopeDataCollector<float> scopeDataCollector { audioBufferQueue };
};
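//============================== Editor's note =================================
// The "IMPORTANT ORDER FIX" comment relies on a C++ rule: non-static data members
// are constructed in declaration order, regardless of the order written in any
// constructor's member-initialiser list. A minimal example of the same dependency
// (types here are illustrative, not from the plugin):
//
//     struct Params { float gain = 0.0f; };
//     struct Engine { explicit Engine (Params& p) : params (p) {} Params& params; };
//     struct Owner
//     {
//         Params params;              // declared first, therefore constructed first
//         Engine engine { params };   // safe: 'params' already exists
//     };
//
// Swapping the two declarations would hand Engine a reference to a not-yet-
// constructed Params, which is exactly what the ordering of 'sp' and 'audioEngine'
// above prevents.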

View File

@@ -1,102 +1,102 @@
#pragma once
#include "AudioBufferQueue.h"
//==============================================================================
template <typename SampleType>
class ScopeComponent : public juce::Component,
private juce::Timer
{
public:
using Queue = AudioBufferQueue<SampleType>;
//==============================================================================
ScopeComponent(Queue& queueToUse)
: audioBufferQueue(queueToUse)
{
sampleData.fill(SampleType(0));
setFramesPerSecond(30);
}
//==============================================================================
void setFramesPerSecond(int framesPerSecond)
{
jassert(framesPerSecond > 0 && framesPerSecond < 1000);
startTimerHz(framesPerSecond);
}
//==============================================================================
void paint(juce::Graphics& g) override
{
g.fillAll(juce::Colours::black);
g.setColour(juce::Colours::white);
auto area = getLocalBounds();
auto h = (SampleType)area.getHeight();
auto w = (SampleType)area.getWidth();
// Oscilloscope
auto scopeRect = juce::Rectangle<SampleType>{ SampleType(0), SampleType(0), w, h / 2 };
plot(sampleData.data(), sampleData.size(), g, scopeRect, SampleType(1), h / 4);
// Spectrum
auto spectrumRect = juce::Rectangle<SampleType>{ SampleType(0), h / 2, w, h / 2 };
plot(spectrumData.data(), spectrumData.size() / 4, g, spectrumRect);
}
//==============================================================================
void resized() override {}
private:
//==============================================================================
Queue& audioBufferQueue;
std::array<SampleType, Queue::bufferSize> sampleData;
juce::dsp::FFT fft{ Queue::order };
using WindowFun = juce::dsp::WindowingFunction<SampleType>;
WindowFun windowFun{ (size_t)fft.getSize(), WindowFun::hann };
std::array<SampleType, 2 * Queue::bufferSize> spectrumData;
//==============================================================================
void timerCallback() override
{
audioBufferQueue.pop(sampleData.data());
juce::FloatVectorOperations::copy(spectrumData.data(), sampleData.data(), (int)sampleData.size());
auto fftSize = (size_t)fft.getSize();
jassert(spectrumData.size() == 2 * fftSize);
windowFun.multiplyWithWindowingTable(spectrumData.data(), fftSize);
fft.performFrequencyOnlyForwardTransform(spectrumData.data());
static constexpr auto mindB = SampleType(-160);
static constexpr auto maxdB = SampleType(0);
for (auto& s : spectrumData)
s = juce::jmap(juce::jlimit(mindB, maxdB, juce::Decibels::gainToDecibels(s) - juce::Decibels::gainToDecibels(SampleType(fftSize))), mindB, maxdB, SampleType(0), SampleType(1));
repaint();
}
//==============================================================================
static void plot(const SampleType* data,
size_t numSamples,
juce::Graphics& g,
juce::Rectangle<SampleType> rect,
SampleType scaler = SampleType(1),
SampleType offset = SampleType(0))
{
auto w = rect.getWidth();
auto h = rect.getHeight();
auto right = rect.getRight();
auto center = rect.getBottom() - offset;
auto gain = h * scaler;
for (size_t i = 1; i < numSamples; ++i)
g.drawLine({ juce::jmap(SampleType(i - 1), SampleType(0), SampleType(numSamples - 1), SampleType(right - w), SampleType(right)),
center - gain * data[i - 1],
juce::jmap(SampleType(i), SampleType(0), SampleType(numSamples - 1), SampleType(right - w), SampleType(right)),
center - gain * data[i] });
}
};

View File

@@ -1,62 +1,62 @@
#pragma once
template <typename SampleType>
class ScopeDataCollector
{
public:
//==============================================================================
ScopeDataCollector(AudioBufferQueue<SampleType>& queueToUse)
: audioBufferQueue(queueToUse)
{
}
//==============================================================================
void process(const SampleType* data, size_t numSamples)
{
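// Wait for a rising edge through triggerLevel, then capture one queue
// buffer's worth of samples and hand it to the GUI thread via the queue.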
size_t index = 0;
if (state == State::waitingForTrigger)
{
while (index++ < numSamples)
{
auto currentSample = *data++;
if (currentSample >= triggerLevel && prevSample < triggerLevel)
{
numCollected = 0;
state = State::collecting;
break;
}
prevSample = currentSample;
}
}
if (state == State::collecting)
{
while (index++ < numSamples)
{
buffer[numCollected++] = *data++;
if (numCollected == buffer.size())
{
audioBufferQueue.push(buffer.data(), buffer.size());
state = State::waitingForTrigger;
prevSample = SampleType(100);
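// Park prevSample above the trigger level so a fresh rising edge is
// required before the next capture can start.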
break;
}
}
}
}
private:
//==============================================================================
AudioBufferQueue<SampleType>& audioBufferQueue;
std::array<SampleType, AudioBufferQueue<SampleType>::bufferSize> buffer;
size_t numCollected { 0 };
SampleType prevSample = SampleType(100);
static constexpr auto triggerLevel = SampleType(0.05);
enum class State { waitingForTrigger, collecting } state{ State::waitingForTrigger };
};

View File

@@ -1,398 +1,398 @@
#include "SynthVoice.h"
#include <cmath>
//==============================================================================
NeuralSynthVoice::NeuralSynthVoice (NeuralSharedParams& sp)
: shared (sp) {}
//==============================================================================
void NeuralSynthVoice::prepare (const juce::dsp::ProcessSpec& newSpec)
{
spec = newSpec;
// --- Oscillator
osc.prepare (spec.sampleRate);
setWaveform (0); // default to sine
// --- Scratch buffer (IMPORTANT: allocate real memory)
tempBuffer.setSize ((int) spec.numChannels, (int) spec.maximumBlockSize,
false, false, true);
tempBlock = juce::dsp::AudioBlock<float> (tempBuffer);
// --- Prepare chain elements
chain.prepare (spec);
// Set maximum delay sizes BEFORE runtime changes
{
// Flanger: up to 20 ms
auto& flanger = chain.get<flangerIndex>();
const size_t maxFlangerDelay = (size_t) juce::jmax<size_t>(
1, (size_t) std::ceil (0.020 * spec.sampleRate));
flanger.setMaximumDelayInSamples (maxFlangerDelay);
flanger.reset();
}
{
// Simple delay: up to 2 s
auto& delay = chain.get<delayIndex>();
const size_t maxDelay = (size_t) juce::jmax<size_t>(
1, (size_t) std::ceil (2.0 * spec.sampleRate));
delay.setMaximumDelayInSamples (maxDelay);
delay.reset();
}
// Envelopes
adsr.setSampleRate (spec.sampleRate);
filterAdsr.setSampleRate (spec.sampleRate);
// Filter
svf.reset();
svf.prepare (spec);
// Initial filter type
const int type = (int) std::lround (juce::jlimit (0.0f, 2.0f,
shared.filterType ? shared.filterType->load() : 0.0f));
switch (type)
{
case 0: svf.setType (juce::dsp::StateVariableTPTFilterType::lowpass); break;
case 1: svf.setType (juce::dsp::StateVariableTPTFilterType::highpass); break;
case 2: svf.setType (juce::dsp::StateVariableTPTFilterType::bandpass); break;
default: break;
}
}
//==============================================================================
void NeuralSynthVoice::renderNextBlock (juce::AudioBuffer<float>& outputBuffer,
int startSample, int numSamples)
{
if (numSamples <= 0)
return;
if (! adsr.isActive())
{
clearCurrentNote();
return; // nothing left to render once the envelope has finished
}
// Apply pending waveform change (from GUI / processor thread)
const int wf = pendingWaveform.exchange (-1, std::memory_order_acq_rel);
if (wf != -1)
setWaveform (wf);
// --- Generate oscillator into temp buffer
tempBuffer.clear();
const int numCh = juce::jmin ((int) spec.numChannels, tempBuffer.getNumChannels());
for (int i = 0; i < numSamples; ++i)
{
const float s = osc.process();
for (int ch = 0; ch < numCh; ++ch)
tempBuffer.getWritePointer (ch)[i] = s;
}
auto block = tempBlock.getSubBlock (0, (size_t) numSamples);
// ================================================================
// Flanger (pre-filter) manual per-sample to set varying delay
// ================================================================
{
auto& flanger = chain.get<flangerIndex>();
const bool enabled = shared.flangerOn && shared.flangerOn->load() > 0.5f;
if (enabled)
{
const float rate = shared.flangerRate ? shared.flangerRate->load() : 0.0f;
float lfoPhase = shared.flangerPhase ? shared.flangerPhase->load() : 0.0f;
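// Note: lfoPhase is re-read from the shared parameter each block and never
// written back, so flangerPhase behaves as a fixed LFO phase offset rather
// than a free-running LFO state.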
const float flangerDepth = shared.flangerDepth ? shared.flangerDepth->load() : 0.0f; // ms
const float mix = shared.flangerDryMix ? shared.flangerDryMix->load() : 0.0f;
const float feedback = shared.flangerFeedback ? shared.flangerFeedback->load() : 0.0f;
const float baseDelayMs = shared.flangerDelay ? shared.flangerDelay->load() : 0.25f;
for (int i = 0; i < numSamples; ++i)
{
const float in = tempBuffer.getReadPointer (0)[i];
const float lfo = std::sin (lfoPhase);
const float delayMs = baseDelayMs + 0.5f * (1.0f + lfo) * flangerDepth;
const float delaySamples = juce::jmax (0.0f, delayMs * 0.001f * (float) spec.sampleRate);
flanger.setDelay (delaySamples);
const float delayed = flanger.popSample (0);
flanger.pushSample (0, in + delayed * feedback);
const float out = in * (1.0f - mix) + delayed * mix;
for (int ch = 0; ch < numCh; ++ch)
tempBuffer.getWritePointer (ch)[i] = out;
lfoPhase += juce::MathConstants<float>::twoPi * rate / (float) spec.sampleRate;
if (lfoPhase > juce::MathConstants<float>::twoPi)
lfoPhase -= juce::MathConstants<float>::twoPi;
}
}
}
// ================================================================
// Filter with per-sample ADSR modulation (poly)
// ================================================================
{
const bool enabled = shared.filterOn && shared.filterOn->load() > 0.5f;
// Update filter type every block (cheap)
const int ftype = (int) std::lround (juce::jlimit (0.0f, 2.0f,
shared.filterType ? shared.filterType->load() : 0.0f));
switch (ftype)
{
case 0: svf.setType (juce::dsp::StateVariableTPTFilterType::lowpass); break;
case 1: svf.setType (juce::dsp::StateVariableTPTFilterType::highpass); break;
case 2: svf.setType (juce::dsp::StateVariableTPTFilterType::bandpass); break;
default: break;
}
const float qOrRes = juce::jlimit (0.1f, 10.0f,
shared.filterResonance ? shared.filterResonance->load() : 0.7f);
svf.setResonance (qOrRes);
const float baseCutoff = juce::jlimit (20.0f, 20000.0f,
shared.filterCutoff ? shared.filterCutoff->load() : 1000.0f);
const float envAmt = shared.fenvAmount ? shared.fenvAmount->load() : 0.0f;
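// The filter envelope sweeps the cutoff in octaves: at full envelope the
// cutoff reaches baseCutoff * 2^envAmt.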
for (int i = 0; i < numSamples; ++i)
{
const float envVal = filterAdsr.getNextSample();
const float cutoff = juce::jlimit (20.0f, 20000.0f,
baseCutoff * std::pow (2.0f, envAmt * envVal));
svf.setCutoffFrequency (cutoff);
if (enabled)
{
for (int ch = 0; ch < numCh; ++ch)
{
float x = tempBuffer.getSample (ch, i);
x = svf.processSample (ch, x);
tempBuffer.setSample (ch, i, x);
}
}
}
}
// ================================================================
// Chorus
// ================================================================
if (shared.chorusOn && shared.chorusOn->load() > 0.5f)
{
auto& chorus = chain.get<chorusIndex>();
if (shared.chorusCentre) chorus.setCentreDelay (shared.chorusCentre->load());
if (shared.chorusDepth) chorus.setDepth (shared.chorusDepth->load());
if (shared.chorusFeedback) chorus.setFeedback (shared.chorusFeedback->load());
if (shared.chorusMix) chorus.setMix (shared.chorusMix->load());
if (shared.chorusRate) chorus.setRate (shared.chorusRate->load());
chain.get<chorusIndex>().process (juce::dsp::ProcessContextReplacing<float> (block));
}
// ================================================================
// Simple Delay (per-voice)
// ================================================================
if (shared.delayOn && shared.delayOn->load() > 0.5f)
{
auto& delay = chain.get<delayIndex>();
const float time = shared.delayTime ? shared.delayTime->load() : 0.1f;
delay.setDelay (juce::jmax (0.0f, time * (float) spec.sampleRate));
delay.process (juce::dsp::ProcessContextReplacing<float> (block));
}
// ================================================================
// Reverb
// ================================================================
if (shared.reverbOn && shared.reverbOn->load() > 0.5f)
{
juce::Reverb::Parameters rp;
rp.damping = shared.reverbDamping ? shared.reverbDamping->load() : 0.0f;
rp.dryLevel = shared.reverbDryLevel ? shared.reverbDryLevel->load() : 0.0f;
rp.freezeMode = shared.reverbFreezeMode ? shared.reverbFreezeMode->load() : 0.0f;
rp.roomSize = shared.reverbRoomSize ? shared.reverbRoomSize->load() : 0.0f;
rp.wetLevel = shared.reverbWetLevel ? shared.reverbWetLevel->load() : 0.0f;
rp.width = shared.reverbWidth ? shared.reverbWidth->load() : 0.0f;
chain.get<reverbIndex>().setParameters (rp);
chain.get<reverbIndex>().process (juce::dsp::ProcessContextReplacing<float> (block));
}
// ================================================================
// Distortion + tone (post LPF/Peak)
// ================================================================
{
const float driveDb = shared.distortionDrive ? shared.distortionDrive->load() : 0.0f;
const float bias = juce::jlimit (-1.0f, 1.0f, shared.distortionBias ? shared.distortionBias->load() : 0.0f);
const float toneHz = juce::jlimit (100.0f, 8000.0f, shared.distortionTone ? shared.distortionTone->load() : 3000.0f);
const int shape = (int) std::lround (juce::jlimit (0.0f, 2.0f,
shared.distortionShape ? shared.distortionShape->load() : 0.0f));
const float mix = shared.distortionMix ? shared.distortionMix->load() : 0.0f;
auto& pre = chain.get<distortionPreGain>();
auto& sh = chain.get<distortionIndex>();
auto& tone = chain.get<distortionPostLPF>();
pre.setGainDecibels (driveDb);
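// Shape 0 = tanh soft clip, 1 = hard clip, 2 = arctan (scaled back towards ±1);
// bias offsets the input for asymmetric clipping.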
// Explicit std::function target (works on MSVC)
if (shape == 0) sh.functionToUse = std::function<float(float)>{ [bias](float x) noexcept { return std::tanh (x + bias); } };
else if (shape == 1) sh.functionToUse = std::function<float(float)>{ [bias](float x) noexcept { return juce::jlimit (-1.0f, 1.0f, x + bias); } };
else sh.functionToUse = std::function<float(float)>{ [bias](float x) noexcept { return std::atan (x + bias) * (2.0f / juce::MathConstants<float>::pi); } };
tone.coefficients = juce::dsp::IIR::Coefficients<float>::makePeakFilter (
spec.sampleRate, toneHz, 0.707f,
juce::Decibels::decibelsToGain (shared.highGainDbls ? shared.highGainDbls->load() : 0.0f));
if (shared.distortionOn && shared.distortionOn->load() > 0.5f)
{
// Wet/dry blend around the shaper
juce::AudioBuffer<float> dryCopy (tempBuffer.getNumChannels(), numSamples);
for (int ch = 0; ch < numCh; ++ch)
dryCopy.copyFrom (ch, 0, tempBuffer, ch, 0, numSamples);
// pre -> shaper -> tone
pre.process (juce::dsp::ProcessContextReplacing<float> (block));
sh.process (juce::dsp::ProcessContextReplacing<float> (block));
tone.process (juce::dsp::ProcessContextReplacing<float> (block));
const float wet = mix, dry = 1.0f - mix;
for (int ch = 0; ch < numCh; ++ch)
{
auto* d = dryCopy.getReadPointer (ch);
auto* w = tempBuffer.getWritePointer (ch);
for (int i = 0; i < numSamples; ++i)
w[i] = dry * d[i] + wet * w[i];
}
}
}
// ================================================================
// EQ + Master + Limiter (EQ guarded by eqOn)
// ================================================================
{
const bool eqEnabled = shared.eqOn && shared.eqOn->load() > 0.5f;
auto& eqL = chain.get<eqLowIndex>();
auto& eqM = chain.get<eqMidIndex>();
auto& eqH = chain.get<eqHighIndex>();
if (eqEnabled)
{
eqL.coefficients = juce::dsp::IIR::Coefficients<float>::makeLowShelf (
spec.sampleRate, 100.0f, 0.707f,
juce::Decibels::decibelsToGain (shared.lowGainDbls ? shared.lowGainDbls->load() : 0.0f));
eqM.coefficients = juce::dsp::IIR::Coefficients<float>::makePeakFilter (
spec.sampleRate, 1000.0f, 1.0f,
juce::Decibels::decibelsToGain (shared.midGainDbls ? shared.midGainDbls->load() : 0.0f));
eqH.coefficients = juce::dsp::IIR::Coefficients<float>::makePeakFilter (
spec.sampleRate, 10000.0f, 0.707f,
juce::Decibels::decibelsToGain (shared.highGainDbls ? shared.highGainDbls->load() : 0.0f));
eqL.process (juce::dsp::ProcessContextReplacing<float> (block));
eqM.process (juce::dsp::ProcessContextReplacing<float> (block));
eqH.process (juce::dsp::ProcessContextReplacing<float> (block));
}
chain.get<masterIndex>().setGainDecibels (shared.masterDbls ? shared.masterDbls->load() : 0.0f);
chain.get<masterIndex>().process (juce::dsp::ProcessContextReplacing<float> (block));
chain.get<limiterIndex>().process (juce::dsp::ProcessContextReplacing<float> (block));
}
// ================================================================
// Apply AMP ADSR envelope
// ================================================================
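// The amp envelope is applied after the FX chain, so effect tails are
// shaped (and eventually silenced) by the envelope as well.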
{
juce::AudioBuffer<float> buf (tempBuffer.getArrayOfWritePointers(), numCh, numSamples);
adsr.applyEnvelopeToBuffer (buf, 0, numSamples);
}
// Mix into output
juce::dsp::AudioBlock<float> (outputBuffer)
.getSubBlock ((size_t) startSample, (size_t) numSamples)
.add (block);
}
//==============================================================================
void NeuralSynthVoice::noteStarted()
{
const float freqHz = (float) getCurrentlyPlayingNote().getFrequencyInHertz();
// Oscillator frequency and phase retrigger
osc.setFrequency (freqHz);
osc.resetPhase (0.0f);
// Chorus snapshot
if (shared.chorusCentre) chain.get<chorusIndex>().setCentreDelay (shared.chorusCentre->load());
if (shared.chorusDepth) chain.get<chorusIndex>().setDepth (shared.chorusDepth->load());
if (shared.chorusFeedback) chain.get<chorusIndex>().setFeedback (shared.chorusFeedback->load());
if (shared.chorusMix) chain.get<chorusIndex>().setMix (shared.chorusMix->load());
if (shared.chorusRate) chain.get<chorusIndex>().setRate (shared.chorusRate->load());
// Delay time (in samples)
if (shared.delayTime)
chain.get<delayIndex>().setDelay (juce::jmax (0.0f, shared.delayTime->load() * (float) spec.sampleRate));
// Reverb snapshot
juce::Reverb::Parameters rp;
rp.damping = shared.reverbDamping ? shared.reverbDamping->load() : 0.0f;
rp.dryLevel = shared.reverbDryLevel ? shared.reverbDryLevel->load() : 0.0f;
rp.freezeMode = shared.reverbFreezeMode ? shared.reverbFreezeMode->load() : 0.0f;
rp.roomSize = shared.reverbRoomSize ? shared.reverbRoomSize->load() : 0.0f;
rp.wetLevel = shared.reverbWetLevel ? shared.reverbWetLevel->load() : 0.0f;
rp.width = shared.reverbWidth ? shared.reverbWidth->load() : 0.0f;
chain.get<reverbIndex>().setParameters (rp);
// Amp ADSR
juce::ADSR::Parameters ap;
ap.attack = shared.adsrAttack ? shared.adsrAttack->load() : 0.01f;
ap.decay = shared.adsrDecay ? shared.adsrDecay->load() : 0.10f;
ap.sustain = shared.adsrSustain ? shared.adsrSustain->load() : 0.80f;
ap.release = shared.adsrRelease ? shared.adsrRelease->load() : 0.40f;
adsr.setParameters (ap);
adsr.noteOn();
// Filter ADSR
juce::ADSR::Parameters fp;
fp.attack = shared.fenvAttack ? shared.fenvAttack->load() : 0.01f;
fp.decay = shared.fenvDecay ? shared.fenvDecay->load() : 0.10f;
fp.sustain = shared.fenvSustain ? shared.fenvSustain->load() : 0.80f;
fp.release = shared.fenvRelease ? shared.fenvRelease->load() : 0.40f;
filterAdsr.setParameters (fp);
filterAdsr.noteOn();
}
//==============================================================================
void NeuralSynthVoice::notePitchbendChanged()
{
const float freqHz = (float) getCurrentlyPlayingNote().getFrequencyInHertz();
osc.setFrequency (freqHz);
}
//==============================================================================
void NeuralSynthVoice::noteStopped (bool allowTailOff)
{
adsr.noteOff();
filterAdsr.noteOff();
if (! allowTailOff)
{
// Hard stop (e.g. voice stealing): reset the envelopes and free the voice immediately.
adsr.reset();
filterAdsr.reset();
clearCurrentNote();
}
}
//==============================================================================
void NeuralSynthVoice::setWaveform (int waveformType)
{
switch (juce::jlimit (0, 3, waveformType))
{
case 0: osc.setWave (BlepWave::Sine); break;
case 1: osc.setWave (BlepWave::Saw); break;
case 2: osc.setWave (BlepWave::Square); break;
case 3: osc.setWave (BlepWave::Triangle); break;
default: osc.setWave (BlepWave::Sine); break;
}
}
#include "SynthVoice.h"
#include <cmath>
//==============================================================================
NeuralSynthVoice::NeuralSynthVoice (NeuralSharedParams& sp)
: shared (sp) {}
//==============================================================================
void NeuralSynthVoice::prepare (const juce::dsp::ProcessSpec& newSpec)
{
spec = newSpec;
// --- Oscillator
osc.prepare (spec.sampleRate);
setWaveform (0); // default to sine
// --- Scratch buffer (IMPORTANT: allocate real memory)
tempBuffer.setSize ((int) spec.numChannels, (int) spec.maximumBlockSize,
false, false, true);
tempBlock = juce::dsp::AudioBlock<float> (tempBuffer);
// --- Prepare chain elements
chain.prepare (spec);
// Set maximum delay sizes BEFORE runtime changes
{
// Flanger: up to 20 ms
auto& flanger = chain.get<flangerIndex>();
const size_t maxFlangerDelay = (size_t) juce::jmax<size_t>(
1, (size_t) std::ceil (0.020 * spec.sampleRate));
flanger.setMaximumDelayInSamples (maxFlangerDelay);
flanger.reset();
}
{
// Simple delay: up to 2 s
auto& delay = chain.get<delayIndex>();
const size_t maxDelay = (size_t) juce::jmax<size_t>(
1, (size_t) std::ceil (2.0 * spec.sampleRate));
delay.setMaximumDelayInSamples (maxDelay);
delay.reset();
}
// Envelopes
adsr.setSampleRate (spec.sampleRate);
filterAdsr.setSampleRate (spec.sampleRate);
// Filter
svf.reset();
svf.prepare (spec);
// Initial filter type
const int type = (int) std::lround (juce::jlimit (0.0f, 2.0f,
shared.filterType ? shared.filterType->load() : 0.0f));
switch (type)
{
case 0: svf.setType (juce::dsp::StateVariableTPTFilterType::lowpass); break;
case 1: svf.setType (juce::dsp::StateVariableTPTFilterType::highpass); break;
case 2: svf.setType (juce::dsp::StateVariableTPTFilterType::bandpass); break;
default: break;
}
}
//==============================================================================
void NeuralSynthVoice::renderNextBlock (juce::AudioBuffer<float>& outputBuffer,
int startSample, int numSamples)
{
if (numSamples <= 0)
return;
if (! adsr.isActive())
clearCurrentNote();
// Apply pending waveform change (from GUI / processor thread)
const int wf = pendingWaveform.exchange (-1, std::memory_order_acq_rel);
if (wf != -1)
setWaveform (wf);
// --- Generate oscillator into temp buffer
tempBuffer.clear();
const int numCh = juce::jmin ((int) spec.numChannels, tempBuffer.getNumChannels());
for (int i = 0; i < numSamples; ++i)
{
const float s = osc.process();
for (int ch = 0; ch < numCh; ++ch)
tempBuffer.getWritePointer (ch)[i] = s;
}
auto block = tempBlock.getSubBlock (0, (size_t) numSamples);
// ================================================================
// Flanger (pre-filter) manual per-sample to set varying delay
// ================================================================
{
auto& flanger = chain.get<flangerIndex>();
const bool enabled = shared.flangerOn && shared.flangerOn->load() > 0.5f;
if (enabled)
{
const float rate = shared.flangerRate ? shared.flangerRate->load() : 0.0f;
float lfoPhase = shared.flangerPhase ? shared.flangerPhase->load() : 0.0f;
const float flangerDepth = shared.flangerDepth ? shared.flangerDepth->load() : 0.0f; // ms
const float mix = shared.flangerDryMix ? shared.flangerDryMix->load() : 0.0f;
const float feedback = shared.flangerFeedback ? shared.flangerFeedback->load() : 0.0f;
const float baseDelayMs = shared.flangerDelay ? shared.flangerDelay->load() : 0.25f;
for (int i = 0; i < numSamples; ++i)
{
const float in = tempBuffer.getReadPointer (0)[i];
const float lfo = std::sin (lfoPhase);
const float delayMs = baseDelayMs + 0.5f * (1.0f + lfo) * flangerDepth;
const float delaySamples = juce::jmax (0.0f, delayMs * 0.001f * (float) spec.sampleRate);
flanger.setDelay (delaySamples);
const float delayed = flanger.popSample (0);
flanger.pushSample (0, in + delayed * feedback);
const float out = in * (1.0f - mix) + delayed * mix;
for (int ch = 0; ch < numCh; ++ch)
tempBuffer.getWritePointer (ch)[i] = out;
lfoPhase += juce::MathConstants<float>::twoPi * rate / (float) spec.sampleRate;
if (lfoPhase > juce::MathConstants<float>::twoPi)
lfoPhase -= juce::MathConstants<float>::twoPi;
}
}
}
// ================================================================
// Filter with per-sample ADSR modulation (poly)
// ================================================================
{
const bool enabled = shared.filterOn && shared.filterOn->load() > 0.5f;
// Update filter type every block (cheap)
const int ftype = (int) std::lround (juce::jlimit (0.0f, 2.0f,
shared.filterType ? shared.filterType->load() : 0.0f));
switch (ftype)
{
case 0: svf.setType (juce::dsp::StateVariableTPTFilterType::lowpass); break;
case 1: svf.setType (juce::dsp::StateVariableTPTFilterType::highpass); break;
case 2: svf.setType (juce::dsp::StateVariableTPTFilterType::bandpass); break;
default: break;
}
const float qOrRes = juce::jlimit (0.1f, 10.0f,
shared.filterResonance ? shared.filterResonance->load() : 0.7f);
svf.setResonance (qOrRes);
const float baseCutoff = juce::jlimit (20.0f, 20000.0f,
shared.filterCutoff ? shared.filterCutoff->load() : 1000.0f);
const float envAmt = shared.fenvAmount ? shared.fenvAmount->load() : 0.0f;
for (int i = 0; i < numSamples; ++i)
{
const float envVal = filterAdsr.getNextSample();
const float cutoff = juce::jlimit (20.0f, 20000.0f,
baseCutoff * std::pow (2.0f, envAmt * envVal));
svf.setCutoffFrequency (cutoff);
if (enabled)
{
for (int ch = 0; ch < numCh; ++ch)
{
float x = tempBuffer.getSample (ch, i);
x = svf.processSample (ch, x);
tempBuffer.setSample (ch, i, x);
}
}
}
}
// ================================================================
// Chorus
// ================================================================
if (shared.chorusOn && shared.chorusOn->load() > 0.5f)
{
auto& chorus = chain.get<chorusIndex>();
if (shared.chorusCentre) chorus.setCentreDelay (shared.chorusCentre->load());
if (shared.chorusDepth) chorus.setDepth (shared.chorusDepth->load());
if (shared.chorusFeedback) chorus.setFeedback (shared.chorusFeedback->load());
if (shared.chorusMix) chorus.setMix (shared.chorusMix->load());
if (shared.chorusRate) chorus.setRate (shared.chorusRate->load());
chain.get<chorusIndex>().process (juce::dsp::ProcessContextReplacing<float> (block));
}
// ================================================================
// Simple Delay (per-voice)
// ================================================================
if (shared.delayOn && shared.delayOn->load() > 0.5f)
{
auto& delay = chain.get<delayIndex>();
const float time = shared.delayTime ? shared.delayTime->load() : 0.1f;
delay.setDelay (juce::jmax (0.0f, time * (float) spec.sampleRate));
delay.process (juce::dsp::ProcessContextReplacing<float> (block));
}
// ================================================================
// Reverb
// ================================================================
if (shared.reverbOn && shared.reverbOn->load() > 0.5f)
{
juce::Reverb::Parameters rp;
rp.damping = shared.reverbDamping ? shared.reverbDamping->load() : 0.0f;
rp.dryLevel = shared.reverbDryLevel ? shared.reverbDryLevel->load() : 0.0f;
rp.freezeMode = shared.reverbFreezeMode ? shared.reverbFreezeMode->load() : 0.0f;
rp.roomSize = shared.reverbRoomSize ? shared.reverbRoomSize->load() : 0.0f;
rp.wetLevel = shared.reverbWetLevel ? shared.reverbWetLevel->load() : 0.0f;
rp.width = shared.reverbWidth ? shared.reverbWidth->load() : 0.0f;
chain.get<reverbIndex>().setParameters (rp);
chain.get<reverbIndex>().process (juce::dsp::ProcessContextReplacing<float> (block));
}
// ================================================================
// Distortion + tone (post LPF/Peak)
// ================================================================
{
const float driveDb = shared.distortionDrive ? shared.distortionDrive->load() : 0.0f;
const float bias = juce::jlimit (-1.0f, 1.0f, shared.distortionBias ? shared.distortionBias->load() : 0.0f);
const float toneHz = juce::jlimit (100.0f, 8000.0f, shared.distortionTone ? shared.distortionTone->load() : 3000.0f);
const int shape = (int) std::lround (juce::jlimit (0.0f, 2.0f,
shared.distortionShape ? shared.distortionShape->load() : 0.0f));
const float mix = shared.distortionMix ? shared.distortionMix->load() : 0.0f;
auto& pre = chain.get<distortionPreGain>();
auto& sh = chain.get<distortionIndex>();
auto& tone = chain.get<distortionPostLPF>();
pre.setGainDecibels (driveDb);
// Explicit std::function target (works on MSVC)
if (shape == 0) sh.functionToUse = std::function<float(float)>{ [bias](float x) noexcept { return std::tanh (x + bias); } };
else if (shape == 1) sh.functionToUse = std::function<float(float)>{ [bias](float x) noexcept { return juce::jlimit (-1.0f, 1.0f, x + bias); } };
else sh.functionToUse = std::function<float(float)>{ [bias](float x) noexcept { return std::atan (x + bias) * (2.0f / juce::MathConstants<float>::pi); } };
tone.coefficients = juce::dsp::IIR::Coefficients<float>::makePeakFilter (
spec.sampleRate, toneHz, 0.707f,
juce::Decibels::decibelsToGain (shared.highGainDbls ? shared.highGainDbls->load() : 0.0f));
if (shared.distortionOn && shared.distortionOn->load() > 0.5f)
{
// Wet/dry blend around the shaper
juce::AudioBuffer<float> dryCopy (tempBuffer.getNumChannels(), numSamples);
for (int ch = 0; ch < numCh; ++ch)
dryCopy.copyFrom (ch, 0, tempBuffer, ch, 0, numSamples);
// pre -> shaper -> tone
pre.process (juce::dsp::ProcessContextReplacing<float> (block));
sh.process (juce::dsp::ProcessContextReplacing<float> (block));
tone.process (juce::dsp::ProcessContextReplacing<float> (block));
const float wet = mix, dry = 1.0f - mix;
for (int ch = 0; ch < numCh; ++ch)
{
auto* d = dryCopy.getReadPointer (ch);
auto* w = tempBuffer.getWritePointer (ch);
for (int i = 0; i < numSamples; ++i)
w[i] = dry * d[i] + wet * w[i];
}
}
}
// ================================================================
// EQ + Master + Limiter (EQ guarded by eqOn)
// ================================================================
{
const bool eqEnabled = shared.eqOn && shared.eqOn->load() > 0.5f;
auto& eqL = chain.get<eqLowIndex>();
auto& eqM = chain.get<eqMidIndex>();
auto& eqH = chain.get<eqHighIndex>();
if (eqEnabled)
{
eqL.coefficients = juce::dsp::IIR::Coefficients<float>::makeLowShelf (
spec.sampleRate, 100.0f, 0.707f,
juce::Decibels::decibelsToGain (shared.lowGainDbls ? shared.lowGainDbls->load() : 0.0f));
eqM.coefficients = juce::dsp::IIR::Coefficients<float>::makePeakFilter (
spec.sampleRate, 1000.0f, 1.0f,
juce::Decibels::decibelsToGain (shared.midGainDbls ? shared.midGainDbls->load() : 0.0f));
eqH.coefficients = juce::dsp::IIR::Coefficients<float>::makePeakFilter (
spec.sampleRate, 10000.0f, 0.707f,
juce::Decibels::decibelsToGain (shared.highGainDbls ? shared.highGainDbls->load() : 0.0f));
eqL.process (juce::dsp::ProcessContextReplacing<float> (block));
eqM.process (juce::dsp::ProcessContextReplacing<float> (block));
eqH.process (juce::dsp::ProcessContextReplacing<float> (block));
}
chain.get<masterIndex>().setGainDecibels (shared.masterDbls ? shared.masterDbls->load() : 0.0f);
chain.get<masterIndex>().process (juce::dsp::ProcessContextReplacing<float> (block));
chain.get<limiterIndex>().process (juce::dsp::ProcessContextReplacing<float> (block));
}
// ================================================================
// Apply AMP ADSR envelope
// ================================================================
{
juce::AudioBuffer<float> buf (tempBuffer.getArrayOfWritePointers(), numCh, numSamples);
adsr.applyEnvelopeToBuffer (buf, 0, numSamples);
}
// Mix into output
juce::dsp::AudioBlock<float> (outputBuffer)
.getSubBlock ((size_t) startSample, (size_t) numSamples)
.add (block);
}
//==============================================================================
void NeuralSynthVoice::noteStarted()
{
const float freqHz = (float) getCurrentlyPlayingNote().getFrequencyInHertz();
// Oscillator frequency and phase retrigger
osc.setFrequency (freqHz);
osc.resetPhase (0.0f);
// Chorus snapshot
if (shared.chorusCentre) chain.get<chorusIndex>().setCentreDelay (shared.chorusCentre->load());
if (shared.chorusDepth) chain.get<chorusIndex>().setDepth (shared.chorusDepth->load());
if (shared.chorusFeedback) chain.get<chorusIndex>().setFeedback (shared.chorusFeedback->load());
if (shared.chorusMix) chain.get<chorusIndex>().setMix (shared.chorusMix->load());
if (shared.chorusRate) chain.get<chorusIndex>().setRate (shared.chorusRate->load());
// Delay time (in samples)
if (shared.delayTime)
chain.get<delayIndex>().setDelay (juce::jmax (0.0f, shared.delayTime->load() * (float) spec.sampleRate));
// Reverb snapshot
juce::Reverb::Parameters rp;
rp.damping = shared.reverbDamping ? shared.reverbDamping->load() : 0.0f;
rp.dryLevel = shared.reverbDryLevel ? shared.reverbDryLevel->load() : 0.0f;
rp.freezeMode = shared.reverbFreezeMode ? shared.reverbFreezeMode->load() : 0.0f;
rp.roomSize = shared.reverbRoomSize ? shared.reverbRoomSize->load() : 0.0f;
rp.wetLevel = shared.reverbWetLevel ? shared.reverbWetLevel->load() : 0.0f;
rp.width = shared.reverbWidth ? shared.reverbWidth->load() : 0.0f;
chain.get<reverbIndex>().setParameters (rp);
// Amp ADSR
juce::ADSR::Parameters ap;
ap.attack = shared.adsrAttack ? shared.adsrAttack->load() : 0.01f;
ap.decay = shared.adsrDecay ? shared.adsrDecay->load() : 0.10f;
ap.sustain = shared.adsrSustain ? shared.adsrSustain->load() : 0.80f;
ap.release = shared.adsrRelease ? shared.adsrRelease->load() : 0.40f;
adsr.setParameters (ap);
adsr.noteOn();
// Filter ADSR
juce::ADSR::Parameters fp;
fp.attack = shared.fenvAttack ? shared.fenvAttack->load() : 0.01f;
fp.decay = shared.fenvDecay ? shared.fenvDecay->load() : 0.10f;
fp.sustain = shared.fenvSustain ? shared.fenvSustain->load() : 0.80f;
fp.release = shared.fenvRelease ? shared.fenvRelease->load() : 0.40f;
filterAdsr.setParameters (fp);
filterAdsr.noteOn();
}
//==============================================================================
void NeuralSynthVoice::notePitchbendChanged()
{
const float freqHz = (float) getCurrentlyPlayingNote().getFrequencyInHertz();
osc.setFrequency (freqHz);
}
//==============================================================================
void NeuralSynthVoice::noteStopped (bool allowTailOff)
{
juce::ignoreUnused (allowTailOff);
adsr.noteOff();
filterAdsr.noteOff();
}
//==============================================================================
void NeuralSynthVoice::setWaveform (int waveformType)
{
switch (juce::jlimit (0, 3, waveformType))
{
case 0: osc.setWave (BlepWave::Sine); break;
case 1: osc.setWave (BlepWave::Saw); break;
case 2: osc.setWave (BlepWave::Square); break;
case 3: osc.setWave (BlepWave::Triangle); break;
default: osc.setWave (BlepWave::Sine); break;
}
}

View File

@@ -1,97 +1,97 @@
#pragma once
#include <JuceHeader.h>
#include <functional> // <-- for std::function used by WaveShaper
#include "NeuralSharedParams.h"
#include "BlepOsc.h"
//==============================================================================
// A single polyBLEP oscillator voice with per-voice ADSR, filter ADSR,
// flanger (delayline), simple delay, chorus, reverb, distortion, EQ, master.
class NeuralSynthVoice : public juce::MPESynthesiserVoice
{
public:
explicit NeuralSynthVoice (NeuralSharedParams& sharedParams);
// JUCE voice API
void prepare (const juce::dsp::ProcessSpec& spec);
void renderNextBlock (juce::AudioBuffer<float>& outputBuffer,
int startSample, int numSamples) override;
void noteStarted() override;
void noteStopped (bool allowTailOff) override;
void notePitchbendChanged() override;
void notePressureChanged() override {}
void noteTimbreChanged() override {}
void noteKeyStateChanged() override {}
// Called from the processor when the GUI waveform param changes.
// The request is stored atomically and applied on the audio thread in renderNextBlock().
void changeWaveform (int wf) { pendingWaveform.store (wf, std::memory_order_release); }
private:
void setWaveform (int waveformType);
//=== Processing chain (without oscillator) ===============================
using DelayLine = juce::dsp::DelayLine<float,
juce::dsp::DelayLineInterpolationTypes::Linear>;
using IIR = juce::dsp::IIR::Filter<float>;
using Gain = juce::dsp::Gain<float>;
using WaveShaper = juce::dsp::WaveShaper<float, std::function<float(float)>>; // std::function target so the transfer curve can be swapped at runtime
using Chorus = juce::dsp::Chorus<float>;
using Reverb = juce::dsp::Reverb;
using Limiter = juce::dsp::Limiter<float>;
enum ChainIndex
{
flangerIndex = 0,
delayIndex,
chorusIndex,
reverbIndex,
distortionPreGain,
distortionIndex,
distortionPostLPF,
eqLowIndex,
eqMidIndex,
eqHighIndex,
masterIndex,
limiterIndex
};
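// NOTE: the enum order above must match the ProcessorChain element order below.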
using Chain = juce::dsp::ProcessorChain<
DelayLine, // flanger
DelayLine, // simple delay
Chorus, // chorus
Reverb, // reverb
Gain, // distortion pre-gain (drive)
WaveShaper, // distortion waveshaper
IIR, // tone / post-EQ for distortion
IIR, // EQ low
IIR, // EQ mid
IIR, // EQ high
Gain, // master gain
Limiter // safety limiter
>;
private:
NeuralSharedParams& shared;
juce::dsp::ProcessSpec spec {};
// ==== Oscillator (polyBLEP) ============================================
BlepOsc osc;
std::atomic<int> pendingWaveform {-1}; // set by changeWaveform()
// ==== Envelopes & Filter ===============================================
juce::ADSR adsr;
juce::ADSR filterAdsr;
juce::dsp::StateVariableTPTFilter<float> svf;
// ==== Chain (FX, EQ, master, limiter) ==================================
Chain chain;
// ==== Scratch buffer (properly allocated) ===============================
juce::AudioBuffer<float> tempBuffer;
juce::dsp::AudioBlock<float> tempBlock;
JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (NeuralSynthVoice)
};

View File

@@ -1,261 +1,261 @@
#pragma once
#include <JuceHeader.h>
#include <vector>
#include <cmath>
// ============================== Design =======================================
// - Bank with F frames, each frame is a single-cycle table of N samples.
// - For each frame, we create L mip-levels: level 0 = full bandwidth,
// level l halves the permitted harmonics (spectral truncation).
// - Runtime chooses level from note frequency and sampleRate, then morphs
// between adjacent frames and crossfades between the two nearest levels.
// - Table read uses linear interpolation (cheap and good enough with N>=2048).
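//
// Minimal usage sketch (illustrative only, using the defaults declared below):
//
//     auto bank = std::make_shared<WT::Bank>();   // 2048-sample tables, 16 frames, 6 levels
//     bank->generateDefaultMorph();               // sine -> saw -> square -> triangle frames
//     bank->buildMipmaps();                       // band-limited copies per level
//
//     WT::Osc osc;
//     osc.prepare (sampleRate);                   // sampleRate supplied by the host
//     osc.setBank (bank);
//     osc.setFrequency (220.0f);
//     osc.setMorph (4.0f);                        // continuous frame index, 0..frames-1
//     const float s = osc.process();              // call once per output sample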
namespace WT
{
// Utility: complex array wrapper for JUCE FFT (interleaved real/imag floats)
struct ComplexBuf
{
std::vector<float> data; // size = 2 * N
explicit ComplexBuf(size_t N = 0) { resize(N); }
void resize(size_t N) { data.assign(2 * N, 0.0f); }
juce::dsp::Complex<float>* asComplex() { return reinterpret_cast<juce::dsp::Complex<float>*>(data.data()); }
};
// =======================================================================
// WavetableBank: holds raw frames + mipmapped versions
// =======================================================================
class Bank
{
public:
// N = table length (must be power-of-two for FFT), frames = number of morph frames
// mipLevels = how many spectral levels (>=1). 5 ~ 6 is plenty for synth use.
Bank(size_t N = 2048, int frames = 16, int mipLevels = 6)
: tableSize(N), numFrames(frames), numLevels(mipLevels),
fft((int)std::log2((double)N))
{
jassert(juce::isPowerOfTwo((int)N));
tables.resize((size_t)numLevels);
for (int l = 0; l < numLevels; ++l)
tables[(size_t)l].resize((size_t)numFrames, std::vector<float>(tableSize, 0.0f));
}
size_t getSize() const { return tableSize; }
int getFrames() const { return numFrames; }
int getLevels() const { return numLevels; }
// Provide raw “design” frames (time-domain single-cycle) then call buildMipmaps().
// framesRaw.size() must equal numFrames, each frame length must equal tableSize.
void setRawFrames(const std::vector<std::vector<float>>& framesRaw)
{
jassert((int)framesRaw.size() == numFrames);
for (const auto& f : framesRaw) jassert(f.size() == tableSize);
raw = framesRaw;
}
// Convenience: generate 16-frame bank morphing Sine -> Saw -> Square -> Triangle
void generateDefaultMorph()
{
std::vector<std::vector<float>> frames;
frames.resize((size_t)numFrames, std::vector<float>(tableSize, 0.0f));
auto fill = [&](int idx, auto func)
{
auto& t = frames[(size_t)idx];
for (size_t n = 0; n < tableSize; ++n)
{
const float ph = (float) (juce::MathConstants<double>::twoPi * (double)n / (double)tableSize);
t[n] = func(ph);
}
normalise(t);
};
// helper waves
auto sine = [](float ph) { return std::sin(ph); };
auto saw = [](float ph) { return (float)(2.0 * (ph / juce::MathConstants<float>::twoPi) - 1.0); };
auto sq = [](float ph) { return ph < juce::MathConstants<float>::pi ? 1.0f : -1.0f; };
auto tri = [](float ph) {
float v = (float)(2.0 * std::abs(2.0 * (ph / juce::MathConstants<float>::twoPi) - 1.0) - 1.0);
return v;
};
// 0..5: sine->saw, 6..10: saw->square, 11..15: square->triangle
const int F = numFrames;
for (int i = 0; i < F; ++i)
{
std::function<float(float)> a, b;
float mix = 0.0f;
if (i <= 5) { a = sine; b = saw; mix = (float)i / 5.0f; }
else if (i <=10) { a = saw; b = sq; mix = (float)(i - 6) / 4.0f; }
else { a = sq; b = tri; mix = (float)(i - 11) / 4.0f; }
fill(i, [=](float ph){ return (1.0f - mix) * a(ph) + mix * b(ph); });
}
setRawFrames(frames);
}
// Build mip-levels by FFT → spectral truncation → IFFT
void buildMipmaps()
{
jassert(!raw.empty());
ComplexBuf freq(tableSize);
ComplexBuf time(tableSize);
for (int f = 0; f < numFrames; ++f)
{
// Forward FFT of raw frame. JUCE's real-only FFT expects the N real samples
// in the first half of a 2*N float buffer; on return the buffer holds the
// spectrum as interleaved complex bins (Re[k] at index 2k, Im[k] at 2k+1).
std::fill(time.data.begin(), time.data.end(), 0.0f);
for (size_t n = 0; n < tableSize; ++n)
time.data[n] = raw[(size_t)f][n];
fft.performRealOnlyForwardTransform(time.data.data());
// Keep a copy of the full spectrum so every mip level is masked from the same source.
freq.data = time.data;
// Helper to zero all harmonics above kMax (inclusive index in [0..N/2]),
// inverse-transform, and store the result as the given mip level.
auto maskAndIFFT = [&](int level, int kMax)
{
time.data = freq.data;
for (size_t k = (size_t)kMax + 1; k <= tableSize / 2; ++k)
{
// Positive-frequency bin
time.data[2 * k + 0] = 0.0f;
time.data[2 * k + 1] = 0.0f;
// Matching negative-frequency bin (keeps the spectrum conjugate-symmetric)
const size_t m = tableSize - k;
if (m > tableSize / 2 && m < tableSize)
{
time.data[2 * m + 0] = 0.0f;
time.data[2 * m + 1] = 0.0f;
}
}
// IFFT (JUCE's inverse already applies the 1/N scaling);
// the reconstructed samples come back in the first half of the buffer.
fft.performRealOnlyInverseTransform(time.data.data());
auto& dst = tables[(size_t)level][(size_t)f];
for (size_t n = 0; n < tableSize; ++n)
dst[n] = time.data[n];
normalise(dst);
};
// Level 0 → all harmonics available up to N/2 - 1
for (int l = 0; l < numLevels; ++l)
{
const int maxH = (int)((tableSize / 2) >> l); // halve per level
const int kMax = juce::jmax(1, juce::jmin(maxH, (int)tableSize/2 - 1));
maskAndIFFT(l, kMax);
}
}
}
// sample at (frame, level, phase in [0,1))
inline float lookup (float frameIdx, int level, float phase) const noexcept
{
const int f0 = juce::jlimit(0, numFrames - 1, (int)std::floor(frameIdx));
const int f1 = juce::jlimit(0, numFrames - 1, f0 + 1);
const float t = juce::jlimit(0.0f, 1.0f, frameIdx - (float)f0);
const auto& T0 = tables[(size_t)level][(size_t)f0];
const auto& T1 = tables[(size_t)level][(size_t)f1];
const float pos = phase * (float)tableSize;
const int i0 = (int) std::floor(pos) & (int)(tableSize - 1);
const int i1 = (i0 + 1) & (int)(tableSize - 1);
const float a = pos - (float) std::floor(pos);
const float s0 = juce::jmap(a, T0[(size_t)i0], T0[(size_t)i1]);
const float s1 = juce::jmap(a, T1[(size_t)i0], T1[(size_t)i1]);
return juce::jmap(t, s0, s1);
}
// choose mip-level for given frequency (Hz) & sampleRate
inline int chooseLevel (float freq, double sampleRate) const noexcept
{
// permitted harmonics at this pitch:
const float maxH = (float) (0.5 * sampleRate / juce::jmax(1.0f, freq));
// pick the smallest level whose harmonic budget (N/2 >> l) is <= maxH, i.e. l = ceil(log2((N/2)/maxH))
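// e.g. N = 2048, sr = 48 kHz, freq = 440 Hz: maxH ≈ 54.5, ratio = 1024 / 54.5 ≈ 18.8,
// l = ceil(log2(18.8)) = 5, whose budget of 1024 >> 5 = 32 harmonics fits under maxH.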
const float base = (float)(tableSize * 0.5);
const float ratio = base / juce::jmax(1.0f, maxH);
int l = (int) std::ceil (std::log2 (ratio));
return juce::jlimit (0, numLevels - 1, l);
}
static void normalise (std::vector<float>& t)
{
float mx = 0.0f;
for (float v : t) mx = juce::jmax(mx, std::abs(v));
if (mx < 1.0e-6f) return;
for (float& v : t) v /= mx;
}
private:
size_t tableSize;
int numFrames;
int numLevels;
juce::dsp::FFT fft;
std::vector<std::vector<float>> raw;
// [level][frame][sample]
std::vector<std::vector<std::vector<float>>> tables;
};
// =======================================================================
// Wavetable Oscillator
// =======================================================================
class Osc
{
public:
void prepare (double sr) { sampleRate = sr; }
void setBank (std::shared_ptr<Bank> b) { bank = std::move(b); }
void setFrequency (float f) { freq = juce::jmax(0.0f, f); phaseInc = freq / (float)sampleRate; }
void setMorph (float m) { morph = m; } // 0..frames-1 (continuous)
void resetPhase (float p = 0.0f) { phase = juce::jlimit(0.0f, 1.0f, p); }
float process()
{
if (!bank) return 0.0f;
const int l0 = bank->chooseLevel(freq, sampleRate);
const int l1 = juce::jmin(l0 + 1, bank->getLevels() - 1);
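// preferL0 is 1 except just above a level boundary (where a slightly lower
// frequency would still choose the previous level); there the coarser l1
// table is used instead, softening the level switch.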
const float preferL0 = 1.0f - juce::jlimit(0.0f, 1.0f,
(float)l0 - (float)bank->chooseLevel(freq * 0.99f, sampleRate));
const float s0 = bank->lookup(morph, l0, phase);
const float s1 = bank->lookup(morph, l1, phase);
const float out = juce::jmap(preferL0, s1, s0); // simple crossfade
phase += phaseInc;
while (phase >= 1.0f) phase -= 1.0f;
return out;
}
private:
std::shared_ptr<Bank> bank;
double sampleRate { 44100.0 };
float freq { 0.0f };
float morph { 0.0f }; // 0..frames-1
float phase { 0.0f };
float phaseInc { 0.0f };
};
} // namespace WT