Too good to be true!
It is hard to imagine that regular users of e-mail have not learned by now to ignore the unsolicited "too good to be true" offers that are flooding the internet, particularly those known to the public as Nigeria 419 Scam Letters. The following is a typical example, in this case a 419-style letter purportedly from Korea:
Subject: Partnership.
From: Mr. Wong Du
Seoul, South Korea.

I will introduce myself I am Mr.Wong du a Banker working in a bank in south Korea Until now I am the account officer to most of the south Korea government accounts and I have since discovered that most of the account are dormant account with a lot of money in the account on further investigation I found out that one particular account belong to the former president of south Korean MR PARK CHUNG HEE,who ruled south Korean from 1963-1979 and this particular account has a deposit of $48m with no next of kin.
My proposal is that since I am the account officer and the money or the account is dormant and there is no next of kin obviously the account owner the former president of South Korea has died long time ago, that you should provide an account for the money to be transferred.
The money that is floating in the bank right now is $48m and this is what I want to transfer to your account for our mutual benefit.
Please if this is okay by you I will advice that you contact me through my direct email address.
Please this transaction should be kept confidential. For your assistance as the account owner we shall share the money on equal basis.
Your reply will be appreciated,
Thank you.
Wong Du
So, when we read about tragic cases of individuals who fell for such offers and swallowed the bait, we wonder how this could happen. How could anyone fail to see that such offers are too good to be true?
But the truth is that establishing that something is "too good to be true" is not necessarily as straightforward a task as the above example suggests. For how else would you explain that highly respected refereed journals keep publishing scientific results that are indeed ... "too good to be true"? The following famous affair is a case in point:
Red faces at Bell Labs
Jan Hendrik Schön, a young researcher at Bell Laboratories in New Jersey, had five papers published in Nature and seven in the journal Science between 1998 and 2001, dealing with advanced aspects of electronics. The discoveries were abstruse, but he was seen by his peers as a rising star.
In 2002, a committee found that he had made up his results on at least 16 occasions, publicly embarrassing his colleagues, his employer and the editorial staffs of both the journals that accepted his results.
Schön, who by then was still only 32, said: "I have to admit that I made various mistakes in my scientific work, which I deeply regret." Nature also reported him as adding in a statement, "I truly believe that the reported scientific effects are real, exciting and worth working for." He would say no more.
Too good to be true
Tim Radford
The Guardian, Thursday 13 November 2003 02.22 GMT
http://www.guardian.co.uk/education/2003/nov/13/research.highereducation2
I hasten to add that my point is not that the "too good to be true" factor is, as such, a product of fraud. Indeed, a "too good to be true" explanation, invention, or theory can be the result of a genuine, sincere effort conducted by a dedicated, highly qualified researcher.
I know this from my own experience: over the past 40 years, on a number of occasions, my investigations have yielded results that I myself found to be too good to be true. That is, I "discovered" results that seemed "wow, so good!" but that ultimately (after I subjected them to close scrutiny, of course) proved to be "too good to be true", namely incorrect, false, wrong, or ... what have you.
I am certain that many others can testify to similar experiences, as this seems to be an integral part of the practice of SCIENCE. Technical and/or conceptual errors in the models we develop, and in the derivation of results, can yield spectacular results that can lead to the formulation of theories that are "too good to be true". The point, of course, is that we must always be on guard, in our own practice of science, to avoid falling into this pitfall, and to make sure that when presented with results by fellow researchers, we distinguish between:
- Case 1: A theory that at first glance seems to be "too good to be true", but turns out to be true after all.
- Case 2: A theory that at first glance seems to be "too good to be true", and in the end turns out to be just that: "too good to be true", that is, not true!
Over the years I have come across a not insignificant number of publications reporting results that fall under Case 2. But the most (shall we say) difficult experiences have been those connected with the review of articles for refereed journals, where I had to deal with authors proposing results and theories of the "too good to be true" type. I am well aware of the trauma that some scientists experience when forced to face the fact that their "beloved brain child" belongs to Case 2 rather than Case 1. I note this point because it is relevant to the issue I discuss on this page.
That said, here is some advice intended especially for young researchers:
- Be wary of theories that are promoted as novel, revolutionary, radically different, breakthroughs, and so on.
- Only a handful of scientists ever find themselves in the situation where their results fall under Case 1. Therefore, your default position should be that the exciting, spectacular theory you have just discovered may well fall under Case 2.
Therefore, triple-check your assumptions and derivations (of course, after you have conducted a thorough literature review and made sure that you are at home with the topic concerned), before you rush out to announce your discovery to the WORLD.
- Make sure that your discovery is not a case of the re-invention of the wheel!
- It is better to admit to a mistake than to extend an error!
More generally, I strongly recommend that young researchers give some thought to Bob Bedow's Ten Natural Laws of Operations Analysis.
Illustrative Example
The above comments are a preamble to a short discussion in which I provide yet more justification for my campaign to contain the spread of Info-Gap Decision Theory in Australia. The point I want to make here is that Info-Gap Decision Theory is a classic example of a theory that is "too good to be true".
So, let us begin by recalling that the standard Info-Gap rhetoric that forms part of every publication on Info-Gap makes some or all of the following claims about this theory:
- Novel, revolutionary, radically different.
- Designed specifically for seeking robust decisions in the face of severe uncertainty.
- Non-probabilistic and likelihood-free in nature.
- Singularly well suited to handle unbounded regions of uncertainty.
Impressive, isn't it?
However, once you cut your way through the layers of rhetoric in these publications, you immediately discover that all that these purported attributes of Info-Gap add up to is the familiar "too good to be true" feature. My objective, then, is to illustrate why anyone with the most basic knowledge of decision theory, or indeed of statistics, finance, or risk analysis, should have realized that Info-Gap is a classic case of the "too good to be true" phenomenon. The trouble, however, is that this fact is not all that easy to uncover, as it lies buried under layers of fog.
So, to get down to business let us use the following model as an illustration of the methodology put forward by Info-Gap. Assume that your company is about to make an extremely important investment decision and for this purpose you need to take into account the future prices of
- Gold
- Oil
on January 1, 2020.
What you obviously also need to assume is that the future prices of these commodities are subject to severe uncertainty. Hence, all you can state at present is a rough estimate of these future prices, which is no more than a wild guess of their true values.
Schematically, the picture is this:
- P = range of values that the two future prices can take.
- p' = wild guess of the true values of the future prices.
- The actual future prices can be anywhere in P.
- The model is non-probabilistic and likelihood-free so there is no reason to believe that the actual values of the future prices are more/less likely to be in any one particular neighborhood of P.
- Specifically, there is no reason to believe that the actual values of the future prices are more/less likely to be in the neighborhood of p' than in any other neighborhood of P (see the sketch below).
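To fix ideas, here is a minimal sketch of this setup in Python. Everything in it is hypothetical: the price bounds defining P and the estimate p' are numbers I made up purely for illustration, not figures taken from any Info-Gap publication.

```python
# A minimal sketch of the setup (all numbers are hypothetical, for illustration only).
import numpy as np

# P: the region of possible future (gold, oil) prices on January 1, 2020,
# taken here, purely for illustration, to be a (very large) box.
P_lower = np.array([200.0, 5.0])       # hypothetical lower corner of P (USD/oz, USD/bbl)
P_upper = np.array([20000.0, 800.0])   # hypothetical upper corner of P

# p': the wild guess (point estimate) of the true future prices.
p_hat = np.array([1500.0, 60.0])       # hypothetical estimate

# The model is non-probabilistic and likelihood-free: no point of P is regarded as
# more or less likely than any other, and p_hat is just a guess, not a mode or a mean.
```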
So far so good.
Now, Info-Gap's approach to the problem can be described by the following arguments:
- Given that the uncertainty in the future prices of these commodities is severe, the estimate p' is a poor indication of their true values and is likely to be substantially wrong. Therefore, the investment decision should not be based only on p'.
- Rather, it should be based on a robustness analysis in the immediate neighborhood of p'.
My position is this: I completely concur with the first argument but reject the second outright.
The picture is this:
- U = the region around the estimate p' where Info-Gap's robustness analysis is conducted.
- This region is typically much smaller than P.
- The results generated by Info-Gap's robustness analysis are not affected by the values of the prices outside U (see the sketch below).
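To make the mechanics concrete, here is a sketch, written in the spirit of an Info-Gap-style robustness analysis, of how such a local analysis proceeds in the investment example. The fractional-error uncertainty model, the linear payoff function, the candidate decision, and the critical level r_c are all hypothetical choices of mine, not taken from the Info-Gap literature.

```python
# A sketch of an Info-Gap-style robustness analysis for the investment example.
# The uncertainty model U(alpha, p_hat) = { p : |p_i - p_hat_i| <= alpha * p_hat_i },
# the payoff function, and the critical level r_c are hypothetical choices.
import numpy as np

p_hat = np.array([1500.0, 60.0])   # wild guess of the future (gold, oil) prices

def payoff(d, p):
    """Hypothetical payoff of a position d = (ounces of gold, barrels of oil)
    bought today at the estimated prices p_hat, if the true prices turn out to be p."""
    return float(np.dot(d, p - p_hat))

def worst_payoff_over_U(d, alpha):
    """Worst payoff over the box U(alpha, p_hat); for a linear payoff the
    minimum is attained at one of the four corners of the box."""
    corners = (p_hat * (1 + alpha * np.array(s))
               for s in [(-1, -1), (-1, 1), (1, -1), (1, 1)])
    return min(payoff(d, p) for p in corners)

def robustness(d, r_c, alphas=np.linspace(0.0, 2.0, 2001)):
    """alpha_hat(d): the largest horizon alpha (on a grid) such that the payoff
    meets the critical level r_c for every price vector p in U(alpha, p_hat)."""
    feasible = [a for a in alphas if worst_payoff_over_U(d, a) >= r_c]
    return max(feasible) if feasible else 0.0

d = np.array([1.0, 10.0])              # hypothetical decision: 1 oz of gold, 10 bbl of oil
print(robustness(d, r_c=-1000.0))      # ~0.476: prices may deviate by up to ~48% of p_hat
                                       # before the loss can exceed $1000
```

Note what the computation actually touches: the estimate p_hat, the horizon alpha, the payoff, and the critical level. Nothing in it looks beyond the neighborhood U(alpha, p_hat).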
Info-Gap's rationale for insisting that a neighborhood of p', rather than only p', be examined is clear and justified: since p' is a poor estimate, the true value of p may be distant from p'. So, it is imperative to investigate the neighborhood of p' and not only the single point estimate p'.
But ...
The obvious point to note here, as brought out clearly by the picture, is that the neighborhood around p' that is in fact investigated by Info-Gap's robustness analysis, namely U, is far smaller than P. Therefore, the analysis in this area is not necessarily representative of the performance of the decisions over P. In other words, U does not represent the severity of the uncertainty that is represented by P.
So, you need not be an expert in decision theory to figure out that by evaluating the performance of decisions on U rather than on P, all we come up with — given the severe uncertainty — is a completely distorted evaluation of these decisions.
In fact, precisely the same rationale that correctly rejects the idea that a single point estimate p' suffices to yield a proper evaluation of decisions under these conditions, also rejects the proposition that the investigation of a small neighborhood around p' can do the job.
That is, under severe uncertainty, neither p' nor U adequately represents P for the purpose of evaluating the robustness of the decisions on P. Indeed, if P is much larger than U then both technically and conceptually U constitutes no more than a "point estimate" of P.
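Continuing the hypothetical example sketched above, a two-line comparison brings this out: the worst outcome over U, which is all that the robustness analysis certifies, says nothing about the worst outcome over P.

```python
# Continuing the hypothetical example: compare the worst payoff over the analysed
# neighborhood U(alpha_hat, p_hat) with the worst payoff over the full region P.
import numpy as np

p_hat   = np.array([1500.0, 60.0])   # wild guess of the future (gold, oil) prices
P_lower = np.array([200.0, 5.0])     # hypothetical lower corner of P
d       = np.array([1.0, 10.0])      # hypothetical decision
alpha_hat = 0.476                    # robustness found in the sketch above

worst_over_U = float(np.dot(d, p_hat * (1 - alpha_hat) - p_hat))  # about -1000
worst_over_P = float(np.dot(d, P_lower - p_hat))                  # -1850

print(worst_over_U, worst_over_P)
# The "guarantee" certified by the analysis (lose at most ~$1000 anywhere in U)
# says nothing about what can happen in the rest of P, let alone in an unbounded P.
```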
So let me spell it out loud and clear: what we have accomplished by means of such a local analysis around the poor estimate p' is the precise opposite of what we set out to accomplish. To wit:
If P is the uncertainty region under consideration and p' is a wild guess of the true value of p, then investigating the performance of decisions on U rather than on P amounts to ... effectively ignoring the severe uncertainty, rather than tackling it.
Of course, you may counter that this need not be so serious, for we can simply increase the neighborhood U. What prevents us from increasing the size of U as desired and/or required?
The trouble is that this is not how Info-Gap works. The size of the neighborhood U is not a parameter that can be freely varied to take account of the requirements presented by the problem in question. Within the framework of Info-Gap decision theory, the size of U is determined entirely according to a certain formula which — under normal conditions — is not affected by the size of P.
So, you cannot increase the size of U at will, say because you have found out that it is significantly smaller than P and therefore fails to represent P adequately. You are bound by the formula deployed by Info-Gap decision theory to determine the size of U. More than this, as attested by Ben-Haim (the Father of Info-Gap decision theory), P is most commonly unbounded. This being the case, U will typically be vanishingly small relative to P, and this fact will not change even if you increase the size of U somewhat.
In short, as dictated by Info-Gap's basic tenets, we must assume, methodologically, that U is inherently much smaller than P. This translates directly into the fact that the robustness analysis conducted by Info-Gap decision theory is not affected in the slightest by the severity of the uncertainty in the parameter of interest. So, in the case of the investment problem under consideration, you would be instructed to choose an investment strategy that is based on a robustness analysis that is utterly unaffected by the severe uncertainty in the future prices of oil and gold, as measured by the size of P. To highlight this feature of Info-Gap decision theory I refer to the part of P that is not covered by U as Info-Gap's No Man's Land.
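For the hypothetical linear payoff used in the sketches above, this can be seen in closed form: the worst payoff over U(alpha, p') equals -alpha (d · p'), so the largest tolerable horizon is alpha_hat(d) = -r_c / (d · p'). The formula involves the estimate, the decision, and the critical level, and contains no trace of P.

```python
# Closed form of the robustness for the hypothetical linear payoff used above:
#     alpha_hat(d) = -r_c / (d . p_hat)
# Declaring P ten times larger, or even unbounded, changes nothing below,
# because P simply does not enter the calculation.
import numpy as np

p_hat = np.array([1500.0, 60.0])   # wild guess of the future (gold, oil) prices
d     = np.array([1.0, 10.0])      # hypothetical decision
r_c   = -1000.0                    # hypothetical maximum tolerable loss

alpha_hat = -r_c / float(np.dot(d, p_hat))
print(alpha_hat)                   # ~0.476, exactly as the grid search found
```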
The picture is then this:
- Info-gap's robustness analysis is not affected by the performance of decisions relative to prices that are outside U.
- Therefore, the results it generates, including the decision that it deems most robust, are completely invariant to the performance of the decisions in the No Man's Land.
- Typically, the No Man's Land is much larger than U.
To appreciate what goes on here, let us discard the No Man's Land, and focus on the region that actually affects Info-Gap's robustness analysis, namely U. The picture is this:
(The picture contrasts two panels: "Given Severe Uncertainty" and "Info-Gap's Robustness Analysis".)
The conclusion to be drawn, then, is that the secret weapon that Info-Gap puts at your disposal for formulating an investment strategy that aims to take account of the severe uncertainty in the future prices of gold and oil is simply this: ... ignore the severity of the uncertainty in these prices!
So, if you fall for Info-Gap's rhetoric (i.e. do not realize that its promises are “too good to be true”) and decide to use it for the purpose of formulating an investment strategy that is robust against the severe uncertainty in the future prices of gold and oil, you would in fact end up with a strategy that... takes no account whatsoever of the severity of the uncertainty in these prices!
And to extend Nassim Taleb's metaphor of Black Swans: not only is Info-Gap decision theory unable to handle genuine Black Swans (extreme events), it cannot even handle simple, ordinary, white swans!
The most intriguing (amusing) aspect of this saga is that this ingenious idea, to manage severe uncertainty by simply ignoring the severity of the uncertainty altogether, is described in the Info-Gap literature as its claim to fame. Of course, not in so many words. Rather, the feature that is hailed as giving Info-Gap its edge is its ability to handle unbounded regions of uncertainty! The fact that the postulated unbounded region of uncertainty comes to nought, because Info-Gap's robustness analysis is conducted only in the neighborhood of the estimate p', has gone unnoticed. In other words, since Info-Gap's robustness analysis does not care a straw about what happens outside U, it does not matter in the slightest whether or not P happens to be unbounded.
All this is lost on Info-Gap scholars, let alone on unwary readers of this literature!
In a nutshell, Info-Gap's idea of tackling severe uncertainty is to pretend that the uncertainty is not severe and to restrict the analysis to a neighborhood of the poor estimate p'.
Remark:
This fact is, however, lost in Info-Gap's bombastic rhetoric, which describes it as a methodology that is singularly suited to handle decision problems subject to severe uncertainty. So, if you are not at home with the difficulties presented by severe uncertainty, you may be taken in by the rhetoric and fail to detect that Info-Gap's prescription for managing severe uncertainty is to ... disregard it altogether!
Indeed, without some expertise in the field you may be unable to detect this fact in the articles that presumably illustrate by example Info-Gap's treatment of uncertainty. Still the question that you must ask yourself — a question that I suspect escaped the referees who recommended acceptance of these articles — takes us back to the "too good to be true" issue.
The question that you must ask yourself is the following: how is it that a decision theory manages, against all odds — the severest uncertainty imaginable, a poor estimate, a simple technique — to identify decisions that are robust against severe uncertainty?
Isn't this a case of "too good to be true" par excellence?!
Fog, Spin, and Rhetoric
My point, then, is that establishing that a theory, idea, or proposal is "too good to be true" may not be easy, due to the thick fog covering it. By this I do not mean fog of the misty variety ... such as the spectacular fog in Dubai, as shown in some of the photos at http://www.flickr.com/photos/our_dubai_property_investment/71214523/.
For our purpose, consider the beautiful fog in the Wikipedia photo by Mila Zinkova.
And I do not mean the kind of fog described in the following quote:
Lately, the phrase "supply chain management" has been coming up in almost every business conversation I have. It's catchy, it sounds important, and it's something that you can impress your vendors and customers with if your company professes to embrace it as a business initiative. In practice, supply chain management can streamline operations, improve communication with business partners, increase efficiency, create more sales opportunities, and the list goes on. Many dealers have their sights set on these goals, but to achieve them those at the corporate helm must define their company's own supply chain boundaries and vision, know how to share the objectives with employees, and be capable of measuring results. This is easier said than done.
In the past 15 years I don't think I have ever met anyone in business--in any industry--who has not at least heard of supply chain management. On the other hand, I've met very few people who can concisely define exactly what it is and outline its parameters. In some ways, supply chain management has morphed into a foggy catchall concept that can be used to describe just about any initiative to improve a company's operations and/or logistics.
Clearing the fog: to harness the power of supply chain management, you need a clear vision of its scope and boundaries.
Lisa Clift
Prosales, Sunday, January 1, 2006
The full article is available online.
What is uppermost in my mind is the kind of fog that is created by spin and rhetoric, that is, language and discourse that aim to cover up, embellish, or explain away lack of expertise, lack of clarity in exposition, confusion, ulterior motives, and so on.
This is described most eloquently by Peter Andras and Bruce Charlton (2002) in their note Hype and Spin in the NHS dealing with Public Health (external link to NHS is mine):
PEOPLE increasingly expect hype and spin to be a feature of almost all the publicly available information generated by government, corporations, and institutions in general — and the NHS is fully implicated in this phenomenon. The result is that, despite the unprecedented volume and accessibility of information, an understanding of the realities of our society seems as remote as ever it was. Propaganda and factual information are presented in an identical fashion, and only a trained economist can tell whether the latest 'increase' in NHS funding is a genuine injection of resources or merely an example of creative accounting.
Peter Andras and Bruce Charlton
The British Journal of General Practice
52(479), 2002 (p. 520)
In the end, if attempted reforms continue to fall short, the unified structure of the NHS will presumably break-up in favour of a variety of smaller, simpler and more trustworthy alternative health care systems. Hype and spin are seductive short-term solutions, but will ultimately prove fatal to institutions that rely on public confidence.
ibid., p. 521
To sum it all up, lifting the blanket of fog, spin, and empty rhetoric covering a theory is an essential step in exposing it as "too good to be true".
And as you can no doubt see, this can be a highly rewarding activity!
Disclaimer: This page, its contents and style, are the responsibility of the author (Moshe Sniedovich) and do not represent the views, policies or opinions of the organizations he is associated/affiliated with.