Tradescapes and Algorithmic Signaling (White Paper)


Algorithmic Signaling in 2013

Trading financial instruments is a moving target. In the days when algorithmic signaling was in its infancy, the results were often spectacular. With so many computer-driven signaling algorithms in the market today, reaping those same benefits is far more difficult. Tradescapes were created to give those using algorithmic signaling every possible edge in those challenging markets where computerized signaling and active trading are often synonymous.

Why is it So Much Harder?

Computerized trading platforms have become commonplace. Trading costs are so low and the platforms so easy to use that there is almost no barrier to entry, and little overhead, for active trading with computerized signaling. At the same time, the evolution of signaling systems is slow. True innovations are few, and the newer ones can be technically daunting. In other words, it is swiftly becoming a level playing field.

Revisiting the Unknown

There was a time one could, simply from trial and error, discover certain signaling paradigms and money management strategies, implement them, and have a successful trading business. The unknowns in those times might be how to signal trends using price breakouts, all-time-high signals, sentiment, and the like; how to pyramid or reverse-pyramid positions using volatility; and how to intelligently place entry and exit stops. The unknowns were largely the mechanics of the trading process.

The questions evolve as the answers evolve. The new unknowns pose less familiar questions. To what extent am I trading order and to what extent am I trading chaos? How do those change with different bar samplings? What would the entire trading landscape look like if I were to trade the order in price movements using a fully accurate signaler? What is possible, and how good do I have to be with my signaling? At what absolute time horizon should I signal for maximum return? What median lag am I currently generating with my signal, and what lag do I need in order to have the reward-pain I can live with in day-to-day trading? How accurately did I capture that order? For instruments that support it, is my signaler responsive enough to employ variable position sizing or continuous leverage signaling? How can I have the greatest possible certainty that the signal I implement in live trading will be as robust and stable across time as possible? If I use an asymmetric signaler, say one that is slow to enter and fast to exit, how do I know that is the right choice? Of all the algorithms within my toolbox, which ones should I use? Do I even know the expectations of each, since there is often no rhyme or reason as to why one works and another does not? And most importantly, if I have something that is currently working, what confidence can I have that it will continue to do so walking blindly forward?

Tradescapes were designed to answer the new questions of this modern era. In answering these definitively, one finds a new edge, the next step in the evolution of trading computerized signals.

The Painful Side of Signal Design and Analysis

Professional signal design is tedious, time-consuming, and often overwhelming in terms of the amount of experimental data one has to wade through. Let us say that we optimize a price breakout system for one specific financial instrument. We vary the entry and exit breakouts across the full range, creating a matrix of hundreds or thousands of breakout pairs. Let us say we plot these in a 3D surface and look for those settings that are both profitable and robust using a reward-pain or reward-risk metric. This kind of algorithmic optimization is now being done, and there is commercial software that will readily do it for you. In all likelihood, there will be multiple zones of favorable response. Let's say you choose one. For the sake of argument, let's make that an entry breakout of 55 bars and an exit breakout of 20 bars, matching the original 'turtle' trading system.
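For readers who prefer to see the mechanics, the following Python sketch illustrates the kind of grid search described above. Everything in it is illustrative rather than prescriptive: the prices are synthetic, the system is a simple long-only breakout, and reward-pain is approximated as net return divided by maximum drawdown. Commercial optimizers implement far more elaborate versions of the same idea.

```python
import numpy as np

def breakout_equity(close, entry_n, exit_n):
    """Equity curve of a long/flat system: enter on an entry_n-bar breakout
    high, exit on an exit_n-bar breakdown low (signals act at the close)."""
    pos, rets = 0, np.zeros(len(close))
    for t in range(max(entry_n, exit_n), len(close)):
        rets[t] = pos * (close[t] / close[t - 1] - 1.0)   # hold return of the prior position
        if pos == 0 and close[t] >= close[t - entry_n:t].max():
            pos = 1
        elif pos == 1 and close[t] <= close[t - exit_n:t].min():
            pos = 0
    return np.cumprod(1.0 + rets)

def reward_pain(equity):
    """Reward-pain proxy: net return divided by maximum drawdown."""
    dd = (1.0 - equity / np.maximum.accumulate(equity)).max()
    return (equity[-1] - 1.0) / dd if dd > 0 else 0.0     # 0 for the degenerate no-trade case

# Synthetic prices stand in for a real instrument.
rng = np.random.default_rng(0)
close = 100.0 * np.cumprod(1.0 + rng.normal(0.0003, 0.01, 2500))

entries = list(range(10, 101, 5))       # candidate entry breakouts
exits = list(range(5, 51, 5))           # candidate exit breakouts
surface = np.array([[reward_pain(breakout_equity(close, e, x)) for x in exits]
                    for e in entries])  # the response surface, one cell per breakout pair
i, j = np.unravel_index(surface.argmax(), surface.shape)
print("best entry/exit pair:", entries[i], exits[j])
```

Plotting `surface` against the entry and exit lengths gives the 3D response surface referred to above; the single best cell is only one of what are usually several favorable zones.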

Now let's say we want to do this design right, the way a professional signal designer would do it in 2013. The historical data becomes a well of in-sample and out-of-sample data for Monte-Carlo experiments with multiple designs and multiple blind walkforwards. One might do that 3D response surface optimization for each of those designs instead of pulling the single peak of the response from an optimizer. Let's say the designer does a thousand such iterations, and the median breakouts that yield the best reward-pain across the thousand studies come in at 40 and 15, each of these values with a wide variation or scatter. The designer doesn't like that variability, but now the 40 and 15 settings are locked in, assuming the performance at those settings merits going forward. If not, perhaps another instrument is explored. Or perhaps a different type of signaling system. Or perhaps a two- or three-component signaling system.
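A reduced sketch of that kind of study is shown below, again with illustrative stand-ins: 100 randomly placed in-sample windows instead of a thousand, synthetic prices, and the same simple breakout system and reward-pain proxy as the previous sketch. The point is only to show how the median and scatter of the per-window optima are obtained.

```python
import numpy as np

def breakout_equity(close, entry_n, exit_n):
    pos, rets = 0, np.zeros(len(close))
    for t in range(max(entry_n, exit_n), len(close)):
        rets[t] = pos * (close[t] / close[t - 1] - 1.0)
        if pos == 0 and close[t] >= close[t - entry_n:t].max():
            pos = 1
        elif pos == 1 and close[t] <= close[t - exit_n:t].min():
            pos = 0
    return np.cumprod(1.0 + rets)

def reward_pain(eq):
    dd = (1.0 - eq / np.maximum.accumulate(eq)).max()
    return (eq[-1] - 1.0) / dd if dd > 0 else 0.0

rng = np.random.default_rng(3)
close = 100.0 * np.cumprod(1.0 + rng.normal(0.0003, 0.01, 5000))

best_pairs = []
for trial in range(100):                       # 1,000 in the text; 100 keeps the demo quick
    start = rng.integers(0, len(close) - 1000)
    window = close[start:start + 1000]         # a randomly placed in-sample window
    grid = [(e, x, reward_pain(breakout_equity(window, e, x)))
            for e in range(20, 81, 10) for x in range(5, 41, 5)]
    best_pairs.append(max(grid, key=lambda g: g[2])[:2])   # keep the best (entry, exit) pair

best_pairs = np.array(best_pairs)
iqr = np.percentile(best_pairs, 75, axis=0) - np.percentile(best_pairs, 25, axis=0)
print("median best entry/exit:", np.median(best_pairs, axis=0))
print("interquartile scatter :", iqr)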

The designer now adds the money management: stops, any form of forward or reverse pyramiding of positions, or perhaps even a variable position-size system.

The process just outlined may take days. If multiple instruments and signal algorithms are involved, it can easily take weeks. And when the designer puts the signal live, most of these unknowns will still be there. This next generation of critical questions will remain unanswered.

Signal Design - Days

Entirely too much time is squandered learning nothing in the current approach to algorithmic signal design. The reasons are simple enough. Many experiments result in insufficient benefit, or perhaps no benefit at all. Many entities will fail to trade altogether at every possible permutation of signal algorithm parameters. Nothing may work. The process of identifying which entities can be effectively traded for more reward than pain can take days, or even weeks if done properly.

In the current state of the science, we don't know, in advance, if any given entity is particularly amenable to the specific algorithmic signaling one may wish to implement. Simply because an entity cannot be signaled with one algorithm or strategy doesn't mean it will fail with others. The mysterious entity-signaler pairing can occupy an enormous amount of a signal designer's time, as well as patience.

Even when one has realized a seemingly successful signaler, there remain serious unanswered questions. If we assume we would like our signaler to capture most of the order, swings, or trending in the price movements, we have no real idea whether our signaler is effectively capturing 90% of the return that is possible from trading order, or just 10%. We may know how much the trading system improves the equity curve relative to a buy and hold, but we don't know how much of the tradable order the signaler has exploited.
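One way to frame that question is as a capture ratio: the return a real, lagging signal collects divided by the return available from perfectly trading the ordered component. The sketch below is purely illustrative, with a centered moving average standing in for the ordered component and a trailing moving-average crossover standing in for the real-world signaler; the names are assumptions, not the product's API.

```python
import numpy as np

def trailing_ma(x, n):
    """Trailing n-bar moving average; early bars are set to +inf so they never signal long."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    ma = np.full(len(x), np.inf)
    ma[n - 1:] = (c[n:] - c[:-n]) / n
    return ma

def strategy_return(close, position):
    """Compound return of a long/flat position series held into the next bar."""
    rets = np.diff(close) / close[:-1]
    return np.prod(1.0 + position[:-1] * rets) - 1.0

rng = np.random.default_rng(1)
close = 100.0 * np.cumprod(1.0 + rng.normal(0.0003, 0.01, 2500))

n = 40
order = np.convolve(close, np.ones(n) / n, mode="same")   # centered-MA "order" proxy (uses future bars by design)
perfect = (np.gradient(order) > 0).astype(float)          # zero-lag, whipsaw-free reference
real = (close > trailing_ma(close, n)).astype(float)      # a lagging real-world stand-in

capture = strategy_return(close, real) / strategy_return(close, perfect)
print(f"captured {capture:.0%} of the return available from trading the order")
```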

We may have reasonable assurance that we have captured the best reward-pain our specific strategy or signaler can realize for our traded entity at the density with which it is sampled, but we don't know whether a much better reward-pain is possible with some form of algorithmic enhancement. When such an enhancement is found in practice, it is usually by intuition or trial and error.

A given signaling algorithm will have intrinsic properties of lag and accuracy, and a trade density most easily understood as an average trade length. Decreasing latency or lag improves performance, at least when accuracy is maintained, but chasing lower lag invites signal whipsaws, and whipsaws seriously degrade accuracy. An experienced signal designer knows this tradeoff between lag and accuracy intuitively. Give an algorithm enough time, and it will always get the entry or exit right; it may simply be too late to do any good.
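The sketch below makes that tradeoff concrete under simple assumptions: one whipsaw-free signal is derived from a smoothed (ordered) component of a synthetic price series, then delayed by a growing number of bars, and the reward-pain proxy from the earlier sketches is recomputed at each lag.

```python
import numpy as np

def reward_pain(eq):
    dd = (1.0 - eq / np.maximum.accumulate(eq)).max()
    return (eq[-1] - 1.0) / dd if dd > 0 else 0.0

rng = np.random.default_rng(2)
close = 100.0 * np.cumprod(1.0 + rng.normal(0.0003, 0.01, 2500))
order = np.convolve(close, np.ones(40) / 40, mode="same")   # smoothed "order" proxy
ideal = (np.gradient(order) > 0).astype(float)              # zero-lag, whipsaw-free signal
rets = np.diff(close) / close[:-1]

for lag in (0, 2, 5, 10, 20):
    sig = np.roll(ideal, lag)           # delay every entry and exit by `lag` bars
    sig[:lag] = 0.0
    equity = np.cumprod(1.0 + sig[:-1] * rets)
    print(f"lag {lag:2d} bars -> reward-pain {reward_pain(equity):6.2f}")
```

In this toy setting the delay is perfectly uniform; a real-world signaler degrades further because its lag scatters from trade to trade and its accuracy is imperfect.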

The biggest drawback of current signal design is that most of the questions that do get answered are answered in the context of a single entity-signaler pair, at one set of parameter settings for the algorithm, and those settings will produce a signal with some measure of inaccuracy and some scatter of lag at each of the entries and exits. One may go through days of work to get those answers, and they will be highly specific, applicable to one and only one trading scenario.

Tradescapes - Minutes

Frankly, we found two standard practices to be self-defeating. First, we felt it was foolish to work with one equity curve at a time. The initial screening can readily be done with a good reward-pain metric, and an effective 3D response surface allows one to see the whole trading 'landscape' in one simple visualization. We call such a visualization a tradescape.

Further, we found it unproductive to live within the constraints of real-world signalers that can map only a small portion of the overall trading landscape, and which do so with varying amounts of inaccuracy and with a measure of scatter in the lag. We found it far more useful to use advances in the EM (expectation modeling) science to sort the order from the chaos, and then signal the ordered component using a universal signaler designed to deliver the full accuracy that can be expected from trading this order. In so doing, we accomplish three critical items, each of which represents a leap in the science of mapping the trading landscape of a time series. First, we use the EM science to gain a universal time horizon reference that will be constant across all entities and signalers. You can think of it as an absolute fractal scale. Second, we introduce the lag uniformly, with no scatter. Every entry and exit will be lagged by the precise count of bars specified by a lag fraction, referenced to this universal time horizon, which we call the EM length. A lag fraction is a direct measure of how "nimble" one's signaler is in terms of lag or signal latency. Third, these universal signals will have no whipsaws or other sources of inaccuracy in capturing the ordered price movements. They will represent the upper limit of what can be realistically achieved.
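In spirit, the universal signaler can be sketched as follows. This is not the product's implementation: a plain moving average stands in for the proprietary EM filter, and the function and parameter names (universal_signal, em_length, lag_fraction) are illustrative assumptions.

```python
import numpy as np

def universal_signal(close, em_length, lag_fraction):
    """Long/flat signal on the smoothed ('ordered') component of price,
    delayed uniformly by round(lag_fraction * em_length) bars at every
    entry and exit, with no whipsaw beyond what the smoothing allows."""
    order = np.convolve(close, np.ones(em_length) / em_length, mode="same")
    raw = (np.gradient(order) > 0).astype(float)   # zero-lag signal on the order
    lag = int(round(lag_fraction * em_length))     # lag referenced to the EM length
    sig = np.roll(raw, lag)
    sig[:lag] = 0.0
    return sig

# Example: a 40-bar horizon with a lag fraction of 0.25, i.e., a uniform 10-bar lag.
rng = np.random.default_rng(4)
close = 100.0 * np.cumprod(1.0 + rng.normal(0.0003, 0.01, 2500))
sig = universal_signal(close, em_length=40, lag_fraction=0.25)
print("bars spent long:", int(sig.sum()))
```

The two knobs, em_length and lag_fraction, are exactly the two axes a tradescape sweeps.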

A tradescape is a collection of the results from 700 back tests arranged in a 3D surface where this universal measure of data utilization is set against lag. One can use a variety of reward-pain metrics to see, for example, where reward exceeds pain. At such a point, one can see how much lag can be tolerated, and the absolute time horizon the signaler needs to offer in order to be effective. One can then pursue the trading edge. We look for the point where we see exceptional reward-pain. We see this sweet spot and we know what kind of lag we need to achieve. Everything is referenced to full accuracy; the designer knows the real-world algorithm must either be free of whipsaws, or those must be removed by some form of epsilon, band-trading, or confirmation-count signaling. Now it is known what is possible: one gets to pick the target point on the tradescape and then pursue it. There is a clearly defined target.
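A tradescape-style surface can be sketched with the same stand-ins as above: a 35 x 20 grid of EM length versus lag fraction, which gives the 700 back tests mentioned here, each scored with a reward-pain proxy. The real product uses the EM filter and its own metrics; this is only the shape of the computation.

```python
import numpy as np

def reward_pain(eq):
    dd = (1.0 - eq / np.maximum.accumulate(eq)).max()
    return (eq[-1] - 1.0) / dd if dd > 0 else 0.0

def universal_signal(close, em_length, lag_fraction):
    order = np.convolve(close, np.ones(em_length) / em_length, mode="same")
    raw = (np.gradient(order) > 0).astype(float)
    lag = int(round(lag_fraction * em_length))
    sig = np.roll(raw, lag)
    sig[:lag] = 0.0
    return sig

rng = np.random.default_rng(5)
close = 100.0 * np.cumprod(1.0 + rng.normal(0.0003, 0.01, 2500))
rets = np.diff(close) / close[:-1]

em_lengths = np.arange(10, 360, 10)        # 35 time-horizon settings
lag_fracs = np.linspace(0.0, 0.95, 20)     # 20 lag settings -> 700 back tests in all
surface = np.empty((len(em_lengths), len(lag_fracs)))
for i, n in enumerate(em_lengths):
    for j, f in enumerate(lag_fracs):
        sig = universal_signal(close, int(n), f)
        surface[i, j] = reward_pain(np.cumprod(1.0 + sig[:-1] * rets))

i, j = np.unravel_index(surface.argmax(), surface.shape)
print(f"sweet spot: EM length {em_lengths[i]} bars, lag fraction {lag_fracs[j]:.2f}")
```

Rendered as a 3D surface or heat map, `surface` is the "landscape": the sweet spot shows the time horizon worth targeting and the maximum lag a real-world signaler can carry there.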

Tradescapes use this universal signaler to do most of what you would do in those tedious studies to answer the basic questions. For example, if one selects a sweet spot in a ten-year aggregate tradescape, it may be a one-second process to see how stable that sweet spot was across individual periods in that ten-year history, say two years at a time. One can know, without any of the pain of those MC experiments, what the trading landscape looked like in each of those periods. One effectively looks at thousands of equity plots in a few seconds in this new type of visualization, and a sense of robustness, or its lack, will emerge that is far superior to anything you have previously used. You will know whether you would have seen reward greater than pain, or simple profitability, at the sweet spot of the aggregate. If it is not there, you may be able to find a signaling length and lag that is consistent, and that can become a target. Or you may find you simply don't want to live with the lack of behavioral constancy. If that is the case, you can move on to a different entity. Or, if you are especially good, you can see how the optimal time horizon shifts with time and seek to build an adaptive algorithm that fits the actual behavior of the entity.
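The sub-period stability check can be sketched the same way: re-score the aggregate sweet spot, here assumed for illustration to be an EM length of 40 bars and a lag fraction of 0.25, on consecutive two-year slices of the history (same moving-average and reward-pain stand-ins as before).

```python
import numpy as np

def reward_pain(eq):
    dd = (1.0 - eq / np.maximum.accumulate(eq)).max()
    return (eq[-1] - 1.0) / dd if dd > 0 else 0.0

def universal_signal(close, em_length, lag_fraction):
    order = np.convolve(close, np.ones(em_length) / em_length, mode="same")
    raw = (np.gradient(order) > 0).astype(float)
    lag = int(round(lag_fraction * em_length))
    sig = np.roll(raw, lag)
    sig[:lag] = 0.0
    return sig

rng = np.random.default_rng(6)
close = 100.0 * np.cumprod(1.0 + rng.normal(0.0003, 0.01, 2520))  # ~10 years of EOD bars

em_length, lag_fraction = 40, 0.25          # the aggregate sweet spot (assumed)
for k in range(5):                          # five consecutive two-year slices
    sl = close[k * 504:(k + 1) * 504]
    sig = universal_signal(sl, em_length, lag_fraction)
    eq = np.cumprod(1.0 + sig[:-1] * (np.diff(sl) / sl[:-1]))
    print(f"years {2 * k + 1}-{2 * k + 2}: reward-pain {reward_pain(eq):6.2f}")
```

If the score holds up slice after slice, the sweet spot is a credible target; if it swings wildly, that lack of behavioral constancy is visible before any real-world signaler is built.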

If an entity cannot be traded at one data density, it may be successfully traded intraday. The universal signaler was designed to be fast. We have performed intraday tradescapes with a million bars of a 24-hr traded entity across a decade in order to find an optimal intraday time horizon. It may take some minutes, but it represents the state of the art made possible by this new paradigm.

And most importantly, you can gather all of the signaling intelligence you will need in mere minutes, possibly less, and without ever touching the vagaries of your real-world signalers.

Enter Real-World Signal Design

Knowing what one is targeting, it becomes much easier to engage the signal design process. To this point in the discussion, the signaling has been done with the universal signaler that gives a complete picture of possibilities. The next step is to send your real-world signals to a tradescape analysis. The real-world signal is plotted atop the tradescape surface. It can actually do better: tradescapes are fully accurate for trading order, but at each point the time horizon and lag are fixed. Lag-reduction signaling methods, adaptive time horizons, and innovations that are effective in trading chaos can result in an improvement; one can 'beat' the tradescape surface at the same median lag and estimated time horizon. Most basic signalers, however, will capture only a portion of the performance of the tradescape. One will generally realize only a fraction of what is possible in terms of reward-pain.
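That comparison can be sketched as follows: estimate the real-world signaler's median entry lag against the zero-lag universal reference at the same assumed EM length, and compare the two reward-pain scores. A trailing moving-average crossover again stands in for the real-world signaler; all names here are illustrative.

```python
import numpy as np

def reward_pain(eq):
    dd = (1.0 - eq / np.maximum.accumulate(eq)).max()
    return (eq[-1] - 1.0) / dd if dd > 0 else 0.0

def trailing_ma(x, n):
    c = np.cumsum(np.insert(x, 0, 0.0))
    ma = np.full(len(x), np.inf)          # inf so early bars never signal long
    ma[n - 1:] = (c[n:] - c[:-n]) / n
    return ma

rng = np.random.default_rng(7)
close = 100.0 * np.cumprod(1.0 + rng.normal(0.0003, 0.01, 2500))
rets = np.diff(close) / close[:-1]
n = 40                                                     # assumed EM length

order = np.convolve(close, np.ones(n) / n, mode="same")
ideal = (np.gradient(order) > 0).astype(float)             # zero-lag universal reference
real = (close > trailing_ma(close, n)).astype(float)       # the real-world signaler

# Median lag: bars between each ideal entry and the nearest later real entry.
ideal_entries = np.flatnonzero(np.diff(ideal) > 0)
real_entries = np.flatnonzero(np.diff(real) > 0)
lags = [real_entries[real_entries >= t][0] - t
        for t in ideal_entries if (real_entries >= t).any()]
print("median entry lag (bars):", np.median(lags))

for name, sig in (("universal reference", ideal), ("real-world signaler", real)):
    print(f"{name}: reward-pain {reward_pain(np.cumprod(1.0 + sig[:-1] * rets)):.2f}")
```

The gap between the two scores, at the real signaler's estimated lag and horizon, is the fraction of the tradescape's performance left on the table.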

With the tradescape design scenario, you quickly learn what your signalers offer in terms of this absolute time horizon. You know approximately the measure of lag to expect. With experience, you can know in a second whether your favored algorithm has a chance of trading a given entity for more reward than pain. There is no optimization, or at least very little is needed. You simply explore the real-world signals that can match this universal time horizon. You can then look at the lag and accuracy of each, and then the equity curve for the signaling. The process can literally take minutes.

The tradescape design scenario seeks to minimize or eliminate optimization and overfitting. While there is nothing one can do to prevent fundamental change walking forward, tradescapes represent the best way we have found to walk forward in confidence. Random chance should be no part of the picture, and the misleading maxima of optimizations should be no part of the process. One will know, historically, exactly how much latitude one has in a signaler when it is put online and, with a reasonable measure of confidence, what can be expected walking forward in each of the market states represented in the historical period.

2013 and Beyond

We find it inefficient to work with one signal at a time when you can work with 700 that map the entire tradable landscape, furnished as a visualization that requires all of about one second for ten years of EOD data. We find it equally silly to waste time and energy on a specific signaling algorithm of suspect accuracy and limited realizable lag when we can use a universal signaler that shows the entire trading response surface at full accuracy and at every lag, from the sluggish to the spectacular, or even the impossibly nimble. We find it painful to cycle through MC studies when robustness across time can be answered in a far more certain and absolute sense. It makes no sense to look at a snapshot in time of one state of one signal when one can take that snapshot of the entire trading landscape. We find it exceedingly wasteful to test a given entity against an array of signalers when most of the answers, and perhaps far more useful ones, come from this universal signaler. Once you know you can effectively trade an entity within the capabilities of your signaler, that is the time to begin the signal design process. With experience, that process can be a surprisingly swift one.