
beethoven's People

Contributors

glaurent, vadymmarkov, wlixcc


beethoven's Issues

Can not read audio from file.

When I set the url property on the Config object, OutputSignalTracker.start() is called, but execution never enters the tap closure and the program exits:

audioEngine.outputNode.installTap(onBus: bus, bufferSize: bufferSize, format: nil) { (buffer: AVAudioPCMBuffer, time: AVAudioTime) in
  DispatchQueue.main.async {
    self.delegate?.signalTracker(self, didReceiveBuffer: buffer, atTime: time)
  }
}

Update for Swift 3.1?

Hi @vadymmarkov, it's been a while since I've tried to compile my app that depends on this framework, and I'm now getting an error when I try to import Pitchy: "Module compiled with Swift 3.0.2 cannot be imported in Swift 3.1"

(screenshot of the import error)

I also tried updating Beethoven with Carthage and got the following error:

carthage update beethoven
*** Fetching Beethoven
*** Fetching Pitchy
*** Fetching Quick
*** Fetching Nimble
*** Checking out Nimble at "v5.1.1"
*** Checking out Quick at "v1.0.0"
*** Checking out Pitchy at "2.0.1"
*** Checking out Beethoven at "3.0.1"
*** xcodebuild output can be found in /var/folders/bm/25m_zbpn1ys21pt5xc004p6h0000gp/T/carthage-xcodebuild.PYOJL6.log
*** Building scheme "Nimble-iOS" in Nimble.xcodeproj
*** Building scheme "Nimble-tvOS" in Nimble.xcodeproj
** BUILD FAILED **


The following build commands failed:
	CompileSwift normal x86_64
	CompileSwiftSources normal x86_64 com.apple.xcode.tools.swift.compiler
(2 failures)

I'm running Xcode 8.3.3.

Any chance you could compile and rerelease for Swift 3.1?

Thanks!

-Paul

Swift 3.0 support for Example

Hey man,

Great library, I'm looking forward to using it. However, it looks like there is a discrepancy between the library and the example that you included.

Could you put in a fix for that?

Thanks.

Hang Risk with Beethoven > InputSignalTracker.swift > start()

When running GuitarTuner on multiple different iOS devices (iPhone, iPad), a Hang Risk warning is raised each time InputSignalTracker's start() method is called.

  • line of code: 84
  • code: captureSession.startRunning()
  • issue: -[AVCaptureSession startRunning] should be called from background thread. Calling it on the main thread can lead to UI unresponsiveness
  • Build Settings - Swift Language Version: Swift 5
  • iOS Deployment Target: iOS 12.0
  • Xcode Version: 14.2
  • iPhone & iPad iOS Version: 16.2

This also happens in my own app when using the Beethoven audio-processing Swift library.
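One way to silence the warning is to move the blocking startRunning() call off the main thread. This is a minimal sketch, not Beethoven's actual code; the queue label and wrapper function are made up for illustration:

```swift
import AVFoundation

// Hypothetical wrapper: -[AVCaptureSession startRunning] blocks until the
// session is running, so dispatch it to a background queue instead of
// calling it from the main thread.
let sessionQueue = DispatchQueue(label: "signal-tracker.capture-session")

func startCapture(_ session: AVCaptureSession) {
  sessionQueue.async {
    session.startRunning()  // safe to block this background queue
  }
}
```

Stopping the session (stopRunning()) should go through the same queue so start/stop calls stay ordered.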

Example throws exception

Hi, I am trying to test the GuitarTuner example, but captureSession.startRunning() is throwing an exception.

I am on Xcode 8.

(screenshot of the exception)

A way to monitor mic input level and set a threshold on it ?

Hi,

Trying the Guitar Tuner example, I find that it detects pitches even from the microphone's baseline noise. So I'm looking for a way to filter out low-level signals, so that detection only occurs when the input level is above a certain value. I don't know much about AVAudio yet, so I'm asking if you have an idea on how to do this (or are planning to do it) before I dive into the docs :)

Thanks
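A simple noise gate can be built on the RMS level of each incoming buffer. The sketch below works on a plain sample array; wiring it to the AVAudioPCMBuffer in the tracker's tap, and the 0.01 threshold value, are assumptions you would tune for your microphone:

```swift
import Foundation

// Sketch: compute the RMS (root-mean-square) level of a buffer of samples.
func rmsLevel(_ samples: [Float]) -> Float {
  guard !samples.isEmpty else { return 0 }
  let sumOfSquares = samples.reduce(0) { $0 + $1 * $1 }
  return sqrt(sumOfSquares / Float(samples.count))
}

// Gate: only run pitch detection when the level clears the threshold.
// The default threshold is an arbitrary starting point, not a library value.
func shouldProcess(_ samples: [Float], threshold: Float = 0.01) -> Bool {
  return rmsLevel(samples) > threshold
}
```

In the tap callback you would copy the buffer's floatChannelData into an array, call shouldProcess, and skip the estimator when it returns false.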

Target watchOS

Hi,

I was just going through your library (which is one of the best I've come across in monophonic pitch detection). I was wondering whether it's possible to port it to work with a watchOS target?
I would be happy to do it, although I'm very much a beginner in Swift. I will just need some guidance on where to start. 😅

Question: why is the AudioSession category 'playAndRecord', instead of just 'record' ?

Hello. First of all I want to thank you for developing such an amazing library.

I was going through the source code (inside the Source folder) to learn how the library is implemented. I understand all the source code and what's going on, except for two questions I've got:

  1. Check here. Why is the AudioSession category playAndRecord ? I mean, this library doesn't do playback at all. However, if I change the category to record only, an error is thrown at runtime.

  2. Check here. What is going on? The comment says //Check input type .... but it's clearly referencing currentRoute.outputs.... so is it input or output?
    I checked Apple documentation. It says this about overrideOutputAudioPort:

If your app uses the playAndRecord category, calling this method with the AVAudioSession.PortOverride.speaker option causes the system to route audio to the built-in speaker and microphone regardless of other settings. This change remains in effect only until the current route changes or you call this method again with the AVAudioSession.PortOverride.none option.

I understand the intent of the code. If we don't have any headphones (with microphone) plugged in, force the system to use the built-in mic. However, I think this falls back to these questions: 1. isn't this the default behaviour? and 2. why use category playAndRecord ?

Thanks!
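For reference, the configuration under discussion looks roughly like this. This is a simplified sketch of the pattern being questioned, not Beethoven's exact code; as noted above, .playAndRecord is also what makes overrideOutputAudioPort(.speaker) effective per Apple's documentation:

```swift
import AVFoundation

// Sketch of the session setup being discussed (assumed, simplified).
func configureSession() throws {
  let session = AVAudioSession.sharedInstance()

  // Switching this to .record reportedly throws at runtime, which is
  // part of the question above.
  try session.setCategory(.playAndRecord)

  // If no headset is plugged in, force the built-in speaker/mic route.
  // Note this inspects currentRoute.outputs, hence the input/output
  // confusion raised in question 2.
  let hasHeadphones = session.currentRoute.outputs.contains {
    $0.portType == .headphones
  }
  if !hasHeadphones {
    try session.overrideOutputAudioPort(.speaker)
  }

  try session.setActive(true)
}
```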

Individual note tracking

Hi @vadymmarkov, first of all, thanks for this awesome library 🎹 🎉! It's been really helpful for starting to understand how pitch detection works.

My issue is that I'm trying to get individual notes in a sequence without one note being recognized multiple times. If I play, say, C4 for 1 second, I'll get 5-6 calls to PitchEngineDelegate's func pitchEngineDidReceivePitch(_ pitchEngine: PitchEngine, pitch: Pitch).

Is there a way to avoid this? Or some sort of comparison to detect if the exact same note is being played without interruptions? The goal is to be able to play a few individual notes (can be the same or different notes) and get the exact output like: A - A - C# - F - E
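One common approach is to debounce the delegate callbacks: only report a note the first time it appears in a run, and reset when the signal drops below the threshold so the same note can repeat. This is a sketch, not part of Beethoven's API; it compares note names as strings, and how you map a Pitch to a name is an assumption:

```swift
// Sketch: collapse repeated detections of the same note into one event.
final class NoteDebouncer {
  private var lastNote: String?

  // Returns the note only the first time it appears in a run;
  // returns nil for consecutive repeats.
  func register(_ note: String) -> String? {
    guard note != lastNote else { return nil }
    lastNote = note
    return note
  }

  // Call this when the signal goes quiet (e.g. from
  // pitchEngineWentBelowLevelThreshold) so an identical note
  // played again after a pause is reported again.
  func reset() { lastNote = nil }
}
```

Feeding it C4, C4, C4, A4, A4 yields just C4 and A4, so "A - A" becomes distinguishable only when a silence separates the two notes.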

Can not access in Objective C

I have integrated the pod in an Objective-C project.
It installs successfully, but when I try to create a PitchEngine object, it isn't allowed and I get a compile-time error.

tvOS target

I'd love to use this on tvOS. Apart from InputSignalTracker is there anything you'd have to cut? I'm guessing it shouldn't be too tricky to add a tvOS target, you probably just didn't get around to it (yet?) 🙏

use in Unity

Hi there!
Sorry, I am new to this, but I am desperate to find a good solution for my pitch detection game made with Unity.
I would love to use Beethoven in Unity, if there is a way to do so?
Any advice would be much appreciated.
Best,

crash in HPSEstimator.estimateLocation due to wrong range values

Having updated to the current master, I now often encounter a problem in HPSEstimator.estimateLocation. The following line

for i in (minIndex + 1)..<maxsearch {
  if spectrum[i] > spectrum[max2] {
  ...

crashes because minIndex = 20 (at least in both cases where I hit the crash) while maxsearch is < 20 (15 or 17), producing an invalid range.
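A defensive fix (a sketch, not the maintainer's patch) is to validate the range before iterating, since constructing a Range<Int> whose lower bound exceeds its upper bound traps at runtime in Swift:

```swift
// Sketch: return nil instead of trapping when the search window is
// empty or inverted (e.g. minIndex = 20, maxSearch = 15).
func safeSearchRange(minIndex: Int, maxSearch: Int) -> Range<Int>? {
  let lower = minIndex + 1
  guard lower < maxSearch else { return nil }
  return lower..<maxSearch
}
```

The estimator would then skip the peak search (or fall back to a default location) when the function returns nil.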

Swift 3 version ?

Just for information, is there a Swift 3 version in the works? If not, I might go ahead and try converting it...

Requires `start` twice

For some reason, microphone recognition only starts when start() is called twice:

class ProcessingUnit: NSObject {
  
  var pitchChangedHandler: ((String) -> ())?
  
  let pitchEngine: PitchEngine = {
    let config = Config()
    let engine = PitchEngine(config: config, delegate: nil)
    return engine
  }()
  override init() {
    super.init()
    pitchEngine.delegate = self
    pitchEngine.start()
    pitchEngine.start()
  }

}

extension ProcessingUnit: PitchEngineDelegate {
  
  func pitchEngineDidReceiveError(_ pitchEngine: PitchEngine, error: Error) {
    
  }
  
  func pitchEngineWentBelowLevelThreshold(_ pitchEngine: PitchEngine) {
    
  }
  
  func pitchEngineDidReceivePitch(_ pitchEngine: PitchEngine, pitch: Pitch) {
    let value = pitch.note.letter.rawValue + "\(pitch.note.octave)"
    pitchChangedHandler?(value)
  }
}

compile error

Hi, I have got several compile errors, please help me.

(screenshots of the compile errors)

New example

Hey,

Excellent library! Are there other examples?

I would like to create an analyzer using your library.

I can't find simple examples that take mic input and output the F0 using YIN or another algorithm.
Thank you

Crash upon engine start

I am getting multiple instances of the same fatal exception, across iOS 12 and 13.

Fatal Exception: com.apple.coreaudio.avfaudio
required condition is false: IsFormatSampleRateAndChannelCountValid(format)

The app is crashing when the pitch engine starts. Here is the relevant code:

private let engine = PitchEngine()

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    engine.start()
}

I found a Stack Overflow post referencing this crash, but it seems marginally helpful at best.

I dug around a bit in the Beethoven code, and traced this engine.start() call back to the audio session setup:

https://github.com/vadymmarkov/Beethoven/blob/master/Source/SignalTracking/Units/InputSignalTracker.swift#L41

Is this something you've seen before? Any ideas on how to resolve it? Thanks
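This exception typically means the input node's hardware format has a zero sample rate or channel count at the moment the tap is installed (for example, the audio session is not yet active, or microphone permission has not been granted). A defensive pre-flight check, sketched here as an assumption and not part of Beethoven's API, can surface the problem before the engine traps:

```swift
import AVFoundation

// Sketch: verify the input format is valid before starting the engine,
// instead of letting installTap crash with
// "IsFormatSampleRateAndChannelCountValid(format)".
func canStartTracking(engine: AVAudioEngine) -> Bool {
  let format = engine.inputNode.inputFormat(forBus: 0)
  return format.sampleRate > 0 && format.channelCount > 0
}
```

If the check fails, activating the session first and confirming record permission are the usual remedies.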

Support for buffer overlap?

I'm loving Beethoven. Great library.

Currently, the pitch engine processes the incoming audio stream in consecutive, separate buffers. There's no overlap in the data between buffers. So for a 44.1kHz input stream and a standard 4096 sample size, the engine returns about ten results per second to the delegate.

I'd like to see support for buffers with overlapping samples. For example, I could specify a sample size of 4096 but an overlap of 2048 samples. Then the pitch engine would pass overlapping windows of audio data to the estimator. In this example, the estimator would end up doing twice as much work, but it would return twice as many results per second with full accuracy.

A basic implementation of this could be done entirely in the existing PitchEngine class, without altering the interface used by the Estimator. A more advanced implementation could be done down the road using algorithms better suited for real-time audio processing (i.e. adding incremental data without reprocessing the whole window).

Would this kind of feature be welcomed in this project? If so, I could write this real quick and submit a pull request.
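The proposed behaviour can be sketched as a sliding window over the incoming sample stream. The type and names below are hypothetical, not PitchEngine's interface; with windowSize 4096 and hop 2048 you would get the 50% overlap described above:

```swift
// Sketch: accumulate incoming samples and emit fixed-size windows that
// advance by `hop` samples, so consecutive windows overlap by
// (windowSize - hop) samples.
struct OverlappingBuffer {
  let windowSize: Int   // e.g. 4096
  let hop: Int          // e.g. 2048 for 50% overlap
  private var samples: [Float] = []

  // Append a chunk from the audio tap; return every complete window
  // now available for the estimator.
  mutating func append(_ chunk: [Float]) -> [[Float]] {
    samples.append(contentsOf: chunk)
    var windows: [[Float]] = []
    while samples.count >= windowSize {
      windows.append(Array(samples.prefix(windowSize)))
      samples.removeFirst(hop)   // slide forward by the hop size
    }
    return windows
  }
}
```

For example, with windowSize 4 and hop 2, six samples produce two windows that share their middle two samples, doubling the result rate at twice the estimator cost, as described above.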

macOS Target

Are there any plans to add a macOS target? I'd love to use this for a desktop application I am building.

Beethoven crashes sometimes when stopping PitchEngine

First of all, great library -- it's really helpful.
One problem is puzzling me right now:

(screenshot of the crash)

I get this error sometimes (NOT ALWAYS) when I try to stop the pitch engine, which leads me to think it's some sort of memory issue.

The offending code:

(screenshot of the offending code)

Any advice regarding this (especially if the problem's on my end) would be appreciated!

Maintained?

This doesn't seem to be maintained anymore. Recommendations for suitable replacements would be helpful in this issue; please leave it open for discoverability, and/or add them to the README documentation.

pointer to other algorithms ?

(Not really an issue)

From my "real-world" tests, the default configuration works very well with an electric guitar plugged into an amp with no effects. However, I tried with an acoustic one, and while pitch detection is still accurate for the higher strings, it's consistently wrong with the lower ones (open low E is detected as B3, open A as E4...). Do you have any pointers to other pitch detection algorithms that would be worth looking into? I've already skimmed through the Wikipedia page.

(Addendum: the Fender Tune app works perfectly with that same guitar.)
