
Comments (28)

jmscraig commented on July 2, 2024

In the last 15,000 contract trade executions there have not been any deadlocks.

So far the updates look great, but I have one more multi-client test cycle to execute before validating that the issue is closed.

jmscraig commented on July 2, 2024

I will have proposed changes, based on recent content in the issue threads, to submit in a pull request for both AlgoSystemBase and the sample strategy.

Not nearly as broad in scope as the last pull request. I will migrate the changes to copies of the freshest files I see on the main branch just before submitting the Pull-Request.

jmscraig commented on July 2, 2024

Sorry in advance for the ramblings.
"Protect us from List editing and reading conflicts or to ensure only one cycle of the workflow in the case structure can execute at one time?"
Great question that certainly hits the nail on the head.
The main process and case structure are handled in one method:
ProcessWorkflow
...

#2 ProcessWorkflow called from a Q of workflow states
Workflow State calls are enqueued... if calls to ProcessWorkflow are already executing, we wait and then execute. E.g. if there is only one proposed element in the Q it is executed immediately; otherwise a separate thread deals with it later (consider an Async call for the immediate execution, however in the past this proved unreliable, NT stops working etc.).
This means the current logic in the case structure does not need to change, but we need a mechanism to process the Q and then call into the process. This also allows the ability to use LIFO at the Q level and purge prior calls...

I would surmise the Q works off OnMarketUpdate, or a Dispatcher Timer as 2nd choice, or a combo of both.

#3 Change OnOrderUpdate: remove the lock, add in more ideal tracing
OnOrderUpdate should be tested without the lock, on the assumption that it is only called again after it returns.
"Optimizations pre-process" are used to limit the calls coming in.

Conclusion
I favor #3, as #1 is not tenable and #2 might be overkill or more problematic and require iterative polishing etc.

Thanks for this reply.

I saw you migrated to a concurrent Q in the latest commit. Good show.

I integrated the few changes I am testing for AlgoSystemBase into code from this new commit and will be using (ergo testing) that code for my next work.

"E.G. If there is only 1 proposed element in the Q it is executed immediately otherwise a separate thread deals with later- (consider an Async call for the immediate execution -however in the past this proved unreliable, NT stops working etc)"

Seems like it could be very scalable... two shared queues: one normal priority, and one very small super-high-priority queue for workflow tasks like monitoring OExU() for order executions signaling it is time to add or take off orders in response, and hyper-responsive monitoring and execution to ensure positions really go fully flat or that all orders really did cancel, etc.

To create the second small super-high-priority queue we could add a second ConcurrentQueue, or keep it as simple as using 3-5 class-level string fields to host the names of the very few top-priority workflow tasks awaiting execution.
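
A minimal sketch of that two-queue idea, with hypothetical names (none of this is AlgoSystemBase code), drained from OnMarketUpdate or a timer:

    using System;
    using System.Collections.Concurrent;

    public class WorkflowQueues
    {
        // Very small queue reserved for the few top-priority tasks
        // (go flat, confirm cancels, re-run CancelAllOrders, etc.).
        private readonly ConcurrentQueue<Action> highPriority = new ConcurrentQueue<Action>();
        // Everything else.
        private readonly ConcurrentQueue<Action> normal = new ConcurrentQueue<Action>();

        public void EnqueueHigh(Action task) { highPriority.Enqueue(task); }
        public void EnqueueNormal(Action task) { normal.Enqueue(task); }

        // Called from OnMarketUpdate or a timer: always drain the high-priority
        // queue completely before taking at most one normal task per pass.
        public void Drain()
        {
            Action task;
            while (highPriority.TryDequeue(out task))
                task();
            if (normal.TryDequeue(out task))
                task();
        }
    }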

I, like you, have not seen good results from trying to go too far multi-threaded within the NT8 platform. I seem to get more reliable and faster results treating the workflow on each strategy thread as non-multithreaded, even if it is not explicitly coded that way.

------ Scalable and Multi-threaded ---

I try not to build sync dependencies ACROSS components: I do everything I can to avoid Async, and so far it all seems doable (e.g. sharing data across indicators and strategies using public doubles, dataseries and arrays exposed through plain, non-bar-synced properties rather than dataseries that follow the full default keep-it-all-well-synced plan).

Go along with the NT pitched plan: after trying a lot of things that were really hard to do in NT, I have a draft conclusion that says I should try to drink the Kool-Aid and leverage the approaches NT suggests we adopt, as much as is reasonable and works well (which of course excludes a lot of Async dependencies). Though it may not have started out solid, by now NT has not done a bad job with the pitched lifecycle approach of ordering on OnBarUpdate or OnMarketUpdate events, monitoring order status and order-name assignment via OnOrderUpdate(), and monitoring order execution fill quantities and order labeling in OnExecutionUpdate().

 I would surmise">
"> I would surmise the Q works off OnMarketUpdate, or a Dispatcher Timer as 2nd choice, or a combo of both"

I vote for the COMBO approach. This occurred to me while trying to debug the Rejections for Cancel Pending in CancelAllOrders() >> Yes, timers for delay and action are good, but nothing will improve the reliability, speed and accuracy of the ClosePosition() and CancelAllOrders() methods more than monitoring order event updates in OnOrderUpdate() and OnExecutionUpdate(), both for accuracy and as the trigger for the next step in the process. So: combo. Scheduled and executed by default off OMU() and timers, AND, when info the workflow is waiting on appears in OOU() or OEU(), the authoritative event handler immediately triggers the next workflow step (e.g. creating Stop and Profit orders, re-executing CancelAllOrders(), etc.).

VERTICAL is interesting: NT has worked hard to design the NT8 platform to work well in Parent class / Child class relationships. From early testing I am intrigued by using a Parent Class-Child Class approach (the Abstract Class still has a role) as a (possibly multi-threaded) workload simplification and balancing model through isolation.

The Parent Class loads all shared class-level fields, arrays, Double Series etc. and does everything a normal indicator or strategy will do. Under the Parent, inheriting direct access to all this goodness, are multiple Child classes that can each execute their own unique, isolated, full OnStateChange() cycle, load unique vars, collections and AddDataSeries() bar arrays, and let you load just one very long dataseries or BarsRequest without requiring every loaded DataSeries to be any bigger than it needs to be. They allow intermittent enabling and disabling of the update subscriptions on a wide number of BarsRequest series, so you use just the data you need, when you need it, without overloading NT8. Child classes have their own OBU() etc. events that are by default isolated and not visible to the Parent or other Children. This leaves me interested in looking at a model of a "Child Class per strategy" plug-in / pull-out, scalable, modular approach to design, test and execute many strategies on top of a single shared workflow execution engine. The ability to put in or pull out strategies with no need to flatten variable namespaces or create direct interdependencies between strategies alone has me intrigued.
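
A very rough, generic sketch of that "Child Class per strategy" idea (plain C# with hypothetical names, not actual NT8 hosting code):

    using System.Collections.Generic;

    // Each "child strategy" keeps its own isolated state and handlers.
    public abstract class ChildStrategyModule
    {
        public abstract void OnBarUpdate();
        public virtual void OnOrderUpdate(object order) { }
    }

    public class ParentEngine
    {
        private readonly List<ChildStrategyModule> children = new List<ChildStrategyModule>();

        // Plug a strategy in or pull it out with no shared variable names
        // and no direct interdependencies between strategies.
        public void Add(ChildStrategyModule child) { children.Add(child); }
        public void Remove(ChildStrategyModule child) { children.Remove(child); }

        // The parent owns the real event subscriptions and simply forwards them.
        public void ForwardBarUpdate() { foreach (var c in children) c.OnBarUpdate(); }
        public void ForwardOrderUpdate(object order) { foreach (var c in children) c.OnOrderUpdate(order); }
    }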

Gotta go.. I will reply in another post about an idea I have been thinking through to gain more decoupled true Multi-threaded, Multi-processor help for our work.

Your Thoughts?

jmscraig commented on July 2, 2024

I gotta go but a snip on

----- Scalable and Multi-threaded & Out of Band ----

The same way you have written external code accessing NT8 account APIs or via an Indicator attached to the chart.

Performance and reliability of my current strategy, with its complex algos and the highest volume of fast orders in and out, was killed by the increasing dependence on broad-scope locks and Async... plus indirect event handlers (e.g. OnAccountUpdate...) choking, bogging down the whole strategy as it tried to keep up with all the updates while still staying free to serve out data.


In response ...

In short, I am eager to pursue implementation of an unsynced, Out-of-Band, separate Indicator hosting all pragmatically possible services that are only indirectly involved in order execution (e.g. OnAccountItemUpdate(), OnConnectionStatusUpdate(), assessing overall market direction, etc.).

Async is not required because I have no dependencies on Bar Chart synced data.

BarsArray[0] and BarsArray[1] in the indicator host instruments I already need to access to ascertain overall market direction, but which have not been loaded into my strategy directly >>> to encourage NT8 to host this indicator on a CPU core different from the one where my strategy is running.

BarsArray[n] is loaded with the instrument I am trading but not a bars period that has been loaded in my strategy (e.g. the strategy runs on Tick and Second dataseries, even to make up 15-min bars etc., while the indicator hosts Daily or Minute dataseries of the instruments I trade).

Account-wide Risk Management Overwatch generally requires access to the same Account, Performance Metrics and Position details as Risk Management execution for a strategy. The Indicator or external app for the Strategy also acts as one of the two geographic instances of the Account-wide Risk Management Overwatch functionality for this instrument.

The Indicator or external app refresh is set to roughly OnPriceChange, so it has ready access to details on instrument state, my account, etc.

Workload related to the total number of Event Update subscriptions constantly processed by the Parent Class is reduced.

Maybe the workflow engine and key event subscriptions are hosted in a Child Class to reduce the distractions the Parent needs to deal with.

The number of Dataseries hosted, update events, workflow processing, method processing, and strategy processing load hosted by the Parent Class is reduced, leaving the Parent more available for its core role of rapid trade execution.


So collectively, via isolation through Child Classes and Out-of-Band processing, the complexity, processing load and processing delay on the Parent Class of the Strategy are greatly reduced, and it is ready for agile tick-by-tick market awareness and response.

Well at least I can dream it will be that way Lol.

I do not expect it to all work out perfectly as conceived, but I do believe that executing the attempt will yield some useful, valuable results.

jmscraig commented on July 2, 2024

"But NT's data feeds and internal data distribution engine are both too slow "

Yes.

There is actually enough value now to use what we have at hand.

Eventually, I envision migrating the small 6-8% of this that is truly dependent on fast tick-by-tick data to an external DLL or app that connects to a more expensive and faster tick data stream.

MicroTrendsTom commented on July 2, 2024

Will need to investigate and revert on this.
Off the top of my head:

Thoughts: the lock might in fact be OK to remove, or make it a smaller lock,
or return if locked,
or queue the action.
But NT8 is not great on multithreading; it will stop responding and get locked up itself in code you can't see or debug etc.

To understand if it is that lock, you can remove it and rerun; then we know for sure.

MicroTrendsTom commented on July 2, 2024

"Also, are you concern about any other areas that might be the root cause of DeadLocks?"
haha well i am of now :-) i will need to run some intensive tests also to replicate.
I would be interested to know the kind of test you are doing.. in fact TBH i am testing with a different strategy derived from it and never even tested that one,. no more than a quick on off !!! oooops

MicroTrendsTom commented on July 2, 2024

Just an idea that might be absurd or not

  • Tick by Tick - and OrderUpdate.
    It would be possible in fact to bypass OnOrderUpdate to a large extent:
    instead of being event driven, it sets a flag picked up in OnMarketData, which calls into ProcessWorkflow
  • if it is not already in execution etc.,
    so OnOrderUpdate does minimal actions and is super fast (see the sketch below)

But when is that next tick, and so forth?
Perhaps a dispatcher timer then, etc.,
or a combo/compromise of all, etc.
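
A minimal sketch of that flag hand-off, assuming ProcessWorkflow() exists as described; the field and method names here are just illustrative:

    using System;
    using System.Threading;

    public class WorkflowHost
    {
        private int workflowPending;   // set by OnOrderUpdate, consumed by OnMarketData
        private int workflowRunning;   // guards ProcessWorkflow against re-entry

        // Call from OnOrderUpdate: record that a pass is needed and return immediately.
        public void FlagWorkflowPass()
        {
            Interlocked.Exchange(ref workflowPending, 1);
        }

        // Call from OnMarketData (or a dispatcher timer): run the workflow only if a
        // pass was requested and no other pass is currently executing.
        public void TryRunWorkflow(Action processWorkflow)
        {
            if (Interlocked.CompareExchange(ref workflowPending, 0, 1) != 1)
                return;                                         // nothing requested
            if (Interlocked.CompareExchange(ref workflowRunning, 1, 0) != 0)
            {
                Interlocked.Exchange(ref workflowPending, 1);   // busy: re-flag for the next tick
                return;
            }
            try { processWorkflow(); }
            finally { Interlocked.Exchange(ref workflowRunning, 0); }
        }
    }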

MicroTrendsTom commented on July 2, 2024

"Also, alternatively the very complex ProcessWorkFlow() and follow-through might have logic exits that prevent return of control to OOU()." definitely would be nice to avoid this... an Async call springs to mind - triggercustomEvent perhaps might server purpose or an async action but that might lead to problems wiht Nt8 internally deadlocking.
or a pattern similar to it... that hands off to onMarkeUpdate to call back in, that is really the attempted pattern - used in other cases

MicroTrendsTom commented on July 2, 2024

debug tracing should reveal some insight into what was last called prior to a deadlock..

jmscraig commented on July 2, 2024

Morning Tom.

"debug tracing should reveal some insight into what was last called prior to a deadlock.."

There is a lot of goodness there. Any tips on how to best quickly use the rich tracing provided?


Any tips on how to best ferret out tracing from a specific chart-strategy combo when you have five up?

I was thinking about adding Instrument + Bar Period in a column early on the left of the tracing, to be able to sort and filter the rows by running strategy and make them easier to follow.

Somewhere I have code that auto-generates a simple, human-friendly unique ID per running instance in OnStateChange() State.DataLoaded; it just combines the results of 2-3 random number generation runs.
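
Roughly along these lines (a sketch with hypothetical names, not the actual code):

    // Class-level sketch: built once, e.g. in OnStateChange() under State.DataLoaded:
    //     instanceId = MakeInstanceId();
    private string instanceId;

    private static string MakeInstanceId()
    {
        // Seed from a GUID so instances started in the same millisecond still differ.
        Random rng = new Random(Guid.NewGuid().GetHashCode());
        return string.Format("{0:X3}-{1:X3}", rng.Next(0x1000), rng.Next(0x1000));   // e.g. "3A7-F02"
    }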

jmscraig commented on July 2, 2024

"Thoughts the lock might in fact be ok to remove or to make a smaller lock"

During the overnight test run three of five charts deadlocked, so working on this OOU() lock was my next step.

What does the lock do for us? What problem or concern was it placed to protect us from?

Protect us from List editing and reading conflicts or to ensure only one cycle of the workflow in the case structure can execute at one time?

MicroTrendsTom commented on July 2, 2024

Morning -Evening in my case.. 3AM or so ha.

Note: I'm not sure why I am not using a concurrent queue for the signal Q, in fact... I assumed I had... that might also be a nice add.

"Any tips on how to best ferret out tracing from a specific chart-strategy combo when you have five up?"
Hmm, good point... to identify each one, an ID would be a nice add - or even a different trace file perhaps.
Or add the system name, dataseries and account in the row prefix etc.

"What does the lock do for us? What problem or concern was it placed to protect us from?"
Protect us from List editing and reading conflicts or to ensure only one cycle of the workflow in the case structure can execute at one time?

I might need to wade through TFS history to find when I added this... off the top of my head, this might in fact have been a belt-and-braces attempt at solving an error which was actually a LINQ statement error that did not filter out nulls in a select/count, and so it might be redundant now: IsOrdersAllActiveOrWorking and IsOrdersAllActiveOrWorkingOrFilled.

i think pull it out and see what goes bang or not...

MicroTrendsTom commented on July 2, 2024

Sorry in advance for the ramblings.
"Protect us from List editing and reading conflicts or to ensure only one cycle of the workflow in the case structure can execute at one time?"
Great question that certainly hits the nail on the head.
The main process and case structure are handled in one method:
ProcessWorkflow
ProcessWorkflow calls occur via these Events:

  1. OnBarUpdate
  2. OnMarketUpdate
  3. OnOrderUpdate - is this synchronous to itself..? If so, no need for a lock
  4. OnExecutionUpdate

All events can fire independently of each other at any time.
Thus we assume it must follow that they can call into ProcessWorkflow independently, in parallel; so should the ProcessWorkflow case structure allow multiple access..? There is currently no limitation to prevent this.

Assuming the key to optimal code and execution is that the logic and events should let event calls return swiftly and not block; in the case of errors causing an Error state, a recursive retry occurs that is also non-blocking.

"Optimizations pre-process" should be made so that order submissions all register and return, then enqueue for processing. So within OnOrderUpdate and OnExecutionUpdate we only need to call in when a complete action has arrived; or, if this point is missed due to a missed event message, a timeout from OnMarketData will spin the workflow again.

ProcessWorkflow Method Overloads
A. A parameter-less call into ProcessWorkflow will merely work off the local property TradeWorkflowState.
Local instance state control means additional calls are effectively ignored, as they are "complete" (nothing to do), or they retry the same case and wait/return, or move it forwards.

B. Calls with a TradeWorkflowState enum value parameter will change the flow. What then if calls arrive at the same time, or during an existing execution of the case statement? That is the main concern; sometimes the LIFO effect is desirable, as the state will change to reflect the most recent context – such as a reversal or cancel.
Potential Solutions/Ideas:

#1 Change the method lock to a flag and deflect calls. Add a quick lock and bool flag to set the method busy or unlocked at the beginning of the ProcessWorkflow case structure.
Caveats: the first call in locks, and other calls are ignored and returned. This means it would need to use a case goto structure and not return recursively... so that would be a code-breaking change with big impact. And what of the discarded state? Perhaps the workflow is then stuck at a state and needs nudging forwards by a monitor method based on state and last state change. This would be an undesirable impact and does not feel like a good fit.
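
For illustration only, the quick-lock-and-bool guard in #1 might look roughly like this (names hypothetical; the existing case structure is elided):

    private readonly object busyLock = new object();
    private bool workflowBusy;

    public void ProcessWorkflowGuarded(TradeWorkflowState state)
    {
        lock (busyLock)                       // quick lock: only protects the flag
        {
            if (workflowBusy)
                return;                       // caveat: this call's state is simply discarded
            workflowBusy = true;
        }
        try
        {
            // ... the existing ProcessWorkflow case structure would run here ...
        }
        finally
        {
            lock (busyLock) { workflowBusy = false; }
        }
    }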

#2 ProcessWorkflow called from a Q of workflow states
Workflow State calls are enqueued... if calls to ProcessWorkflow are already executing, we wait and then execute. E.g. if there is only one proposed element in the Q it is executed immediately; otherwise a separate thread deals with it later (consider an Async call for the immediate execution, however in the past this proved unreliable, NT stops working etc.).
This means the current logic in the case structure does not need to change, but we need a mechanism to process the Q and then call into the process. This also allows the ability to use LIFO at the Q level and purge prior calls...
Caveats:
Perhaps the states in the Q might also be out of date/erroneous by the time they arrive, so the Q needs to be purged etc. But when, and for which states, do we allow the Q to move again or be purged?
I would surmise the Q works off OnMarketUpdate, or a Dispatcher Timer as 2nd choice, or a combo of both.
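
One way to express that LIFO/purge idea, as a sketch against a hypothetical ConcurrentQueue of the TradeWorkflowState enum:

    using System.Collections.Concurrent;

    // Drain the signal Q and keep only the newest state, purging stale ones.
    private static bool TryTakeLatest(ConcurrentQueue<TradeWorkflowState> q, out TradeWorkflowState latest)
    {
        bool found = false;
        latest = default(TradeWorkflowState);
        TradeWorkflowState state;
        while (q.TryDequeue(out state))
        {
            latest = state;      // the last item dequeued is the most recently enqueued
            found = true;
        }
        return found;
    }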

#3 Change OnOrderUpdate: remove the lock, add in more ideal tracing
OnOrderUpdate should be tested without the lock, on the assumption that it is only called again after it returns.
"Optimizations pre-process" are used to limit the calls coming in.

Conclusion
I favor #3, as #1 is not tenable and #2 might be overkill or more problematic and require iterative polishing etc.
No doubt you have some ideas also.

jmscraig commented on July 2, 2024

Good Post.

Let me add another burden of complexity which might in turn be a blessing with regards to issues you surfaced in the post above.

The Burden: the workflow rules will need to scale to handle more use cases, and unique use cases per implementation. The AlgoSystemBase example included in the repository has a defined set of workflow rules that will not meet all cases.
E.g. some of the current rules.

  • Enter with Market Orders
  • Force the position flat prior to a new entry
  • Allow only one entry per direction
  • Four Profit and Four Stop Orders

I know that for myself I will likely have at least 10 business rule sets for entries and exit workflow management.

The blessing may be that the implementation of scalable rules reduces the complexity burden on the one WorkFlow Case to handle it all.

As I was thinking through how to implement my rules...

For sure I don't want to make manual tweaks to the default workflow engine and rules... what a nightmare that would be to keep up with for each new release.
So as a first pass, an early quick conclusion was to create a class or region that could be joined to AlgoSystemBase in whole, without repetitive line-by-line integration work to adjust/integrate for each release.
I would copy the existing workflow engine patterns, naming conventions and processes into two smaller workflow engines, each representing a major class of use case.
By default each workflow event would start at the top of my appropriate engine, executing if the use case was found; if the use case was not found it would drop into the top of the shared existing AlgoSystemBase workflow engine for execution. It then becomes the responsibility of each workflow designer to design the workflows knowing the other engines exist and will run in parallel at unpredictable times.
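
A rough sketch of that fall-through idea (hypothetical interface and names, not part of AlgoSystemBase):

    using System.Collections.Generic;

    public interface IWorkflowEngine
    {
        bool TryProcess(TradeWorkflowState state);   // true if this engine handled the state
    }

    public class WorkflowEngineChain
    {
        private readonly List<IWorkflowEngine> engines = new List<IWorkflowEngine>();

        // Custom engines go first; the shared base engine is registered last.
        public WorkflowEngineChain(params IWorkflowEngine[] orderedEngines)
        {
            engines.AddRange(orderedEngines);
        }

        public void Process(TradeWorkflowState state)
        {
            foreach (var engine in engines)
                if (engine.TryProcess(state))
                    return;                          // first engine that owns the use case wins
        }
    }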

But that was just the first 30 seconds of thinking about it.. I am sure you have surfaced solutions and have thoughts about how to scale the workflow model and I would love to hear them.

MicroTrendsTom commented on July 2, 2024

"I was thinking about adding Instrument + Bar Period in a column early left in the tracing to be able to sort and filter them by running strategy to make them easier to follow."
this.Account.Name
+ " " + this.Name                    // can label your strategy
+ " " + this.Instrument.FullName
+ " " + (string.IsNullOrEmpty(Thread.CurrentThread.Name) ? "CurrentThread.Name.?" : Thread.CurrentThread.Name);

        if (State > State.DataLoaded && State < State.Terminated)
        {
            txt += "|DS=" + (this.Bars != null ? " " + this.Bars.ToChartString() : string.Empty);
        }

jmscraig commented on July 2, 2024

Looks good..

"can label you strategy" when I run multiple instances of the same compiled strategy all instances end up using the same label and so output is indistinguishable.
Instrument.FullName is good because different charts can use different Instruments. .

Adding the Bar period to the Instrument name gives pretty good granularity. Something like:

. " " + this.Name
. + " " + this.Instrument.FullName
. + " " + this.ChartBars.Properties.BarsPeriod.BarsPeriodTypeName
. + " " + this.ChartBars.Properties.BarsPeriod.Value
. + " " + (string.IsNullOrEmpty(Thread.CurrentThread.Name) ? "CurrentThread.Name.?" : Thread.CurrentThread.Name)

MicroTrendsTom commented on July 2, 2024

"But that was just the first 30 seconds of thinking about it.. I am sure you have surfaced solutions and have thoughts about how to scale the workflow model and I would love to hear them."

Yes, it's a very basic static engine for sure...

  • Enter with Market Orders - you can override the submit and add any type of order; there is OCO stop entry support already.
  • Force the position flat prior to a new entry - add a stop or add an entry limit; yes, I use this also, sometimes with a stop and a limit at the same time.
  • Allow only one entry per direction - yes, compounding would be good.
  • Four Profit and Four Stop Orders - yes indeed.

And that was the intended subject of a different example, or a substantial reworking of this one....
This is really quite a rigid, inflexible example compared to what we can do.
I would write another one soon that allows much more scope, or make a larger revision to this one etc. - the question is what is easier... and what the impact is on derived models... abstracting out part of this one to allow a different base class for each type of implementation might be the answer...

So I think, now that there is demand for it, it should be undertaken and implemented, as I also use and view the same world as you do with those features...

What I use in practice
Then I have my 100% fault-tolerant, 100% swing trading systems that have the 2 parts and start with workspace load.

1 Signal generation/Execution
2 Trade Management
They can be stopped, started, cancelled, the server bounced, and they wake up and continue; you can compound the position and try to break it, but it will adapt to the new context. Trade manually alongside and it all works out... That runs off the account object and resides in the Market Analyzer - and AddOns can also be used - but the Market Analyzer has dataseries support for indicators, which is very useful. So exit brackets up to 5, but we can compound an unlimited number of times etc.

Workspaces can be loaded via NT8 AddOn classes, as well as other items loading in sequence, such as data etc.

looks like this:
[screenshot]

MicroTrendsTom commented on July 2, 2024

So to me there are 2 worlds in NT8:
A. Automated day trading / Sunday-to-Friday close (assuming it doesn't stop) with an NT8 strategy - with the ability to backtest.
B. 100% flexible, fault-tolerant, adaptive, true swing/position trading systems - no ability to backtest; set and forget, no human required except for rollovers and the odd bit of monitoring and sense checking.

99% are in Cat A, have small accounts and won't even use 4 brackets - 2 more commonly: 1 at 10 ticks and the 2nd a runner to 32 ticks, for example. And they switch on and off daily. The thin end of the wedge will trade from Sunday to Friday but may or may not hold overnight. They want to backtest and sense-check it on a chart, and even have some kind of UI for interaction, so that is a later phase for this project also.

MicroTrendsTom commented on July 2, 2024

Instrument.FullName is good because different charts can use different Instruments.

Adding the Bar period to the Instrument name gives pretty good granularity. Something like:

. " " + this.Name
. + " " + this.Instrument.FullName
. + " " + this.ChartBars.Properties.BarsPeriod.BarsPeriodTypeName
. + " " + this.ChartBars.Properties.BarsPeriod.Value
. + " " + (string.IsNullOrEmpty(Thread.CurrentThread.Name) ? "CurrentThread.Name.?" : Thread.CurrentThread.Name)

please see method around line 3300
#region Logging Tracing
public void Print(string msg)

we do have "|DS=" + (this.Bars != null ? " " + this.Bars.ToChartString() : string.Empty)
but it is not added until later due to state - limitations.

we could perhaps then insert that into place

  • " " + this.Account.Name
    + " " + this.Name
    + " " + this.Instrument.FullName
    + " " + (this.Bars != null ? " " + this.Bars.ToChartString() : string.Empty)
    + " " + (string.IsNullOrEmpty(Thread.CurrentThread.Name) ? "CurrentThread.Name.?" : Thread.CurrentThread.Name);

jmscraig commented on July 2, 2024

Looks like a really good response. I will read and reply in the morning.
As I head to bed, I just wanted to share some more good news. With zero system errors, 10,000 contract transactions have now completed:

  • Using the new precise lock implementation, somewhat qualifying that design and implementation

  • Prior to starting, I completely commented out the onOrderUpdateLockObject in OnOrderUpdate(), and I am not yet seeing any evidence that it is still needed at all... I am hoping the scope of that lock was in fact the root cause of the deadlocks, because if so we are on a path to closing a lot of issues.

Two test clients are running, one with IsStrategyUnSafe enabled and one not.

MicroTrendsTom commented on July 2, 2024

OK, ty - I have a fairly large commit to add in, to address a bug and make the debug trace easier.

"As I head to bed just wanted to share some more good news. With zero system errors now 10,000 contract transactions have completed:"

Fantastic, that's pretty cool, and your coding input is very much appreciated also - I'm reading it and saying to myself, now why didn't I do that? ;-) The use of a bool and quick lock is very neat also. I have yet to really look at the whitebox code optimization part of it and it's nice to see some of that addressed etc.

Signal Q
So, also news - leave or revert - I opted for a concurrent queue for signal actions... this, I have read, is slower, but might be a better fit. If it's slower then OK, revert back: some light locks and a bool pattern would be more appropriate when setting the Q, and when querying the count and then executing off the count etc.

Print - some improvements there also and the removal of a potential stack overflow.

Transition Exit Bug - complete; added in a case for this so it executes 1 time only and then stops... due to caveats of realtime data and historical trades...

Forwards
I intend to add some more layers to this so we have an algo framework with a GUI and off-the-shelf components and filters/exits etc., and for sure I need far more flexibility and advanced methods... and these are the same, plus some more, as you mentioned prior - so I'm looking to abstract base layers also - I would add a new one underneath the current one... a base to the base of this model etc.

Collaboration and Sharing
It's definitely a very good exercise for me to have shared this and to have this level of collaboration; it's really great to communicate at this level and even get to polish and upgrade the game. This engine is the tip of the iceberg in fact - I have vaults and decades of code all hidden and protected, and I'm a bit tired of that really - having to hide it and secret it away.
So in fact there are possibly other avenues, with code bases that align with both our goals and would benefit from this level of collaboration also, but they are more private and wrapped in mutual NDAs etc. - more on that later.

For now I will use this in a commercial project and loop back to add in the extra features/layers as I go etc.

jmscraig commented on July 2, 2024

You left a number of good posts for me to read through, think about and respond to. All great collaboration.

Have lots to do today but will circle back to all the comments you left across the issues.

MicroTrendsTom commented on July 2, 2024

Yes, some very nice ideas there.
High-priority Q... very nice idea, very intrigued....
The combo model with OMU and a dispatcher timer - I like it.

The intrinsic event model also cannot be trusted 100% to be perfect, due to connection disruptions and so forth.
If the latter does not cause deadlocks or massive slowdowns... Timers and realtime actions must be delay-loaded and delay-started.
E.g. when the system loads data and transitions into realtime, that is not realtime yet - that is the catch-up-with-the-realtime-ticks phase, playing them back. Only when the tick timestamp catches up with real time can you consider that the event IsRealtimeProperFirst; then, from this point, start the dispatcher or start processing within OMU - or you will see a bottleneck as the ticks/render delays catch up.
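
A sketch of that catch-up check, with an arbitrary tolerance and hypothetical names, called per tick with the tick's timestamp (e.g. from OMU):

    private bool isRealtimeProper;
    private static readonly TimeSpan CatchUpTolerance = TimeSpan.FromSeconds(2);

    private void CheckRealtimeProper(DateTime tickTime)
    {
        if (isRealtimeProper)
            return;
        if (DateTime.Now - tickTime <= CatchUpTolerance)
        {
            isRealtimeProper = true;
            // Only from here is it safe to start the DispatcherTimer / Q processing.
        }
    }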

Etc. I use them very sparsely and removed almost all of them in other foundations, in particular where they are loaded into the Market Analyzer... there are many caveats to be tested, so the dispatcher is only used if really needed. I used system timers to great effect in NT7, but in NT8 not so much... with hindsight it might just be that my code at that point had not factored in some nuances I now use in other systems for smooth operation etc. I have a tendency to use them for controlling connections, loading workspaces, downloading data or updating a GUI - but should they prove to be OK, then using them for Q processing or validation and safety and GUI updates is a nice fit, instead of tying onto OMU.

Got a deadlock on a project, ha... did I bust something...?
Will resolve and feed back.

jmscraig commented on July 2, 2024

"if the later does not cause deadlocks or massive slowdowns"
I like the effects seen from auto-throttling the frequency of all workflow, and dynamically disabling workflow I would not use anyway (e.g. testing for entries), when DataLag increases.

Conversely, when DataLag is low and volume is low, it is a great time to initiate a bunch of admin routines for a few milliseconds.

MicroTrendsTom commented on July 2, 2024

"I like the effects seen by auto-throttling frequency of all workflow, and dynamically disabling workflow I would not use anyway (e.g. testing for entries) when DataLag increases." wow now that is nice.... yes indeed...

So we need to measure the delta from the tick datetime to realtime now, and have that as a public local,
and a property IsFastMarket or IsVeryFastMarket or IsMarketLag etc. It could be based on an interval t = 3 seconds, for example, and be variable.
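
Kept as a sketch with illustrative names, that could be as simple as:

    private TimeSpan lagInterval = TimeSpan.FromSeconds(3);   // "t = 3 seconds for example", variable
    public TimeSpan DataLag { get; private set; }
    public bool IsMarketLag { get { return DataLag > lagInterval; } }

    // Update per tick (e.g. from OMU), then gate optional workflow on it.
    private void UpdateDataLag(DateTime tickTime)
    {
        DataLag = DateTime.Now - tickTime;
    }

    private bool AllowEntryTesting()
    {
        return !IsMarketLag;   // e.g. skip testing for entries while the feed is lagging
    }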

So that really answers the Unsafe mode conundrum -
it could indeed make use of that type of pattern: check after the position has been filled and events have fired,
start to check that orders and positions are correct and the exit orders balance, and lock or allow the next trade.
So that would be a FastSafeMode, if we guard/monitor in UnsafeMode etc.

  • In a slow market that's very easy; the problem is a fast market, as you say -
    so then it could sit out and wait for a window, so to speak, to use that - in the interim using exit orders which adapt to variable position size, or exit orders which are added on partial fill.

The other item is this...
check that price is not near a stopPrice or the exit orders, and use a different mode -

so for example, if price has hit an order, an order-change state, or is near an order,
use the complex event workflow - if it is far away, don't use it etc.

The fastest way to close a position is to move the limit order exits past the price

There is no need to send cancel orders and there is no need to send a close order...
OCO takes out the other one (the stop loss);
it requires 1 action.

Caveats:
if the limit is partially filling in flight, will it reject the change request to move past price etc.?
The sum of working limit quantities must equal the position's unfilled quantity.
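
As a sketch only (profitTargetOrder is a hypothetical handle to the working exit limit inside the strategy; verify the ChangeOrder signature against your NT8 version):

    // For a long position: drop the sell limit well below the bid so it becomes
    // marketable and fills; OCO then takes out the paired stop - one action,
    // no cancel orders, no close order.
    private void FastFlattenLong(Order profitTargetOrder)
    {
        if (profitTargetOrder == null || profitTargetOrder.OrderState != OrderState.Working)
            return;

        double aggressivePrice = GetCurrentBid() - 10 * TickSize;
        ChangeOrder(profitTargetOrder, profitTargetOrder.Quantity, aggressivePrice, 0);
    }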

MicroTrendsTom commented on July 2, 2024

So, on the status of this first bug...?
Closed or still in test?
Or close and re-open if we see the deadlock resurface?

jmscraig commented on July 2, 2024

I think work on this bug has merged into the more active CancelPending thread. #6

That one is more active so why don't we close this one for now.
