Poll: khalid approval rating

Do you approve of the job khalid is doing as moderator?

  • Disapprove: 3 votes (15.8%)
  • khalid sucks: 16 votes (84.2%)

Total voters: 19. Poll closed.

Soygen

The Dirty Dozen For the Price of One
<Nazi Janitors>
Why, in your opinion, have reputable scientists stopped pursuing it?
 

Himeo

Vyemm Raider
The skeptics in this thread are claiming that this new age hippie asshole scammed the government out of millions of dollars over 20 years.

The skeptics in this thread are claiming that this limp-wristed Bob Ross wannabe convinced dozens of serious army officers that THEY could scam the government without realizing they were scamming the government.

He's either a modern-day Joe Smith, or we've left contact with "angels" to Ingo Swann and stoned high school dropouts.

 

Himeo

Vyemm Raider
Why, in your opinion, have reputable scientists stopped pursuing it?

My opinion doesn't matter. What matters is that the evidence shows this effect is real and deserves further study, yet no scientist will touch it with a ten-foot pole unless it's under a top-secret government program run by the CIA, KGB, DIA, or some other alphabet agency.
 

Soygen

The Dirty Dozen For the Price of One
<Nazi Janitors>
it deserves further study
This is your opinion, which is why I'm asking "why?" If you think your opinion doesn't matter, then why are you constantly sharing it in this thread? When you say what I quoted above, who are you speaking for, other than yourself?
 

Himeo

Vyemm Raider
Further, the skeptics in this thread are claiming Ingo Swann scammed the government and the military officers he trained SO WELL that they continued researching PSI under dozens of new top secret projects while publicly denying it for decades.
 

Himeo

Vyemm Raider
This is your opinion, so that's why I'm asking "why?".

That is not my opinion; that is the opinion of statistician Jessica Utts.

http://www.ics.uci.edu/~jutts/UttsStatPsi.pdf

Abstract. Parapsychology, the laboratory study of psychic phenomena, has had its history interwoven with that of statistics. Many of the controversies in parapsychology have focused on statistical issues, and statistical models have played an integral role in the experimental work. Recently, parapsychologists have been using meta-analysis as a tool for synthesizing large bodies of work. This paper presents an overview of the use of statistics in parapsychology and offers a summary of the meta-analyses that have been conducted. It begins with some anecdotal information about the involvement of statistics and statisticians with the early history of parapsychology. Next, it is argued that most nonstatisticians do not appreciate the connection between power and "successful" replication of experimental effects. Returning to parapsychology, a particular experimental regime is examined by summarizing an extended debate over the interpretation of the results. A new set of experiments designed to resolve the debate is then reviewed. Finally, meta-analyses from several areas of parapsychology are summarized. It is concluded that the overall evidence indicates that there is an anomalous effect in need of an explanation.
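To make the power-and-replication point from that abstract concrete, here's a minimal sketch with made-up but plausible numbers: a ganzfeld-style study with a 25% hit rate by chance, a true hit rate of 33%, and 30 trials per study. The specific values are my assumption for illustration, not taken from the paper.

```python
from scipy.stats import binom

# Hypothetical setup (my numbers, not Utts'): trials per study,
# chance hit rate, and an assumed true hit rate.
n, p0, p1 = 30, 0.25, 0.33

# Smallest hit count that reaches one-tailed p < .05 under pure chance.
k_crit = int(binom.ppf(0.95, n, p0)) + 1   # 12 hits out of 30

# Power: the chance a study of a real effect clears that bar.
power = 1 - binom.cdf(k_crit - 1, n, p1)
print(k_crit, round(power, 2))             # ~0.27: most replications "fail"
```

With power around 0.3, roughly two out of three honest replications of a real effect of this size come back "nonsignificant," which is exactly the connection the abstract says nonstatisticians miss.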
 

Skanda

I'm Amod too!
Just because you don't like what they found doesn't mean it hasn't been explored by real scientists before.
 

Himeo

Vyemm Raider
Just because you don't like what they found doesn't mean it hasn't been explored by real scientists before.

I'm saying it literally hasn't. The "real scientists" who study this skeptically are quickly convinced that it's real. They are then ostracized, and lumped in with the "nutjobs".

Somehow, after a hundred years of empirical observation, no one will touch this.
 

Himeo

Vyemm Raider
From the article I linked, in Statistical Science, 1991, Vol. 6, No. 4, pages 363-403:

SATISFYING THE SKEPTICS

Parapsychology is probably the only scientific discipline for which there is an organization of skeptics trying to discredit its work. The Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) was established in 1976 by philosopher Paul Kurtz and sociologist Marcello Truzzi when "Kurtz became convinced that the time was ripe for a more active crusade against parapsychology and other pseudo-scientists" (Pinch and Collins, 1984, page 527). Truzzi resigned from the organization the next year (as did Professor Diaconis) "because of what he saw as the growing danger of the committee's excessive negative zeal at the expense of responsible scholarship" (Collins and Pinch, 1982, page 84). In an advertising brochure for their publication The Skeptical Inquirer, CSICOP made clear its belief that paranormal phenomena are worthy of scientific attention only to the extent that scientists can fight the growing interest in them. Part of the text of the brochure read: "Why the sudden explosion of interest, even among some otherwise sensible people, in all sorts of paranormal 'happenings'? ... Ten years ago, scientists started to fight back. They set up an organization, the Committee for the Scientific Investigation of Claims of the Paranormal."

During the six years that I have been working with parapsychologists, they have repeatedly expressed their frustration with the unwillingness of the skeptics to specify what would constitute acceptable evidence, or even to delineate criteria for an acceptable experiment. The Hyman and Honorton Joint Communiqué was seen as the first major step in that direction, especially since Hyman was the Chair of the Parapsychology Subcommittee of CSICOP.

Hyman and Honorton (1986) devoted eight pages to "Recommendations for Future Psi Experiments," carefully outlining details for how the experiments should be conducted and reported. Honorton and his colleagues then conducted several hundred trials using these specific criteria and found essentially the same effect sizes as in earlier work for both the overall effect and effects with moderator variables taken into account. I would expect Professor Hyman to be very interested in the results of these experiments he helped to create. While he did acknowledge that they "have produced intriguing results," it is both surprising and disappointing that he spent only a scant two paragraphs at the end of his discussion on these results.


Instead, Hyman seems to be proposing yet another set of requirements to be satisfied before parapsychology should be taken seriously. It is difficult to sort out what those requirements should be from his account: "[They should] specify, in advance, the complete sample space and the critical region. When they get to the point where they can specify this along with some boundary conditions and make some reasonable predictions, then they will have demonstrated something worthy of our attention."

Diaconis believes that psi experiments do not deserve serious attention unless they actively involve skeptics. Presumably, he is concerned with subject or experimenter fraud, or with improperly controlled experiments. There are numerous documented cases of fraud and trickery in purported psychic phenomena. Some of these were observed by Diaconis and reported in his article in Science. Such cases have mainly been revealed when investigators attempted to verify the claims of individual psychic practitioners in quasi-experimental or uncontrolled conditions. These instances have received considerable attention, probably because the claims are so sensational, the fraud is so easy to detect by a skilled observer and they are an easy target for skeptics looking for a way to discredit psychic phenomena. As noted by Hansen (1990), "Parapsychology has long been tainted by the fraudulent behavior of a few of those claiming psychic abilities" (page 25).

Control against deception by subjects in the laboratory has been discussed extensively in the parapsychological literature (see, e.g., Morris, 1986, and Hansen, 1990). Properly designed experiments should preclude the possibility of such fraud. Hyman and Honorton (1986, page 355) explicitly discussed precautions to be taken in the ganzfeld experiments, all of which were followed in the autoganzfeld experiments. Further, the controlled laboratory experiments discussed in my paper usually used a large number of subjects, a situation that minimizes the possibility that the results were due to fraud on the part of a few subjects. As for the possibility of experimenter fraud, it is of course an issue in all areas of science. There have been a few such instances in parapsychology, but since parapsychologists tend to be aware of this possibility, they were generally detected and exposed by insiders in the field.

It is not clear whether or not Diaconis is suggesting that a magician or "qualified skeptic" needs to be present at all times during a laboratory experiment. I believe that it would be more productive for such consultation to occur during the design phase, and during the implementation of some pilot sessions. This is essentially what was done for the autoganzfeld experiments, in which Professor Hyman, a skeptic as well as an accomplished magician, participated in the specification of design criteria, and mentalists Bem and Kross observed experimental sessions. Bem is also a well-respected experimental psychologist.

While I believe that the skeptics, particularly some of the more knowledgeable members of CSICOP, have served a useful role in helping to improve experiments, their counter-advocacy stance is counterproductive. If they are truly interested in resolving the question of whether or not psi abilities exist, I would expect them to encourage evaluation and experimentation by unbiased, skilled experimenters. Instead, they seem to be trying to discourage such interest by providing a moving target of requirements that must be satisfied first.

The chair of CSICOP's parapsychology subcommittee tells the researchers their experiments are flawed and lists everything he thinks he'd need in an experiment to satisfy him. The researchers then conduct the experiments exactly the way he wanted and note that the PSI effect is still there. Then, instead of acknowledging "Wow, this is not what I expected," he says, "Uh, okay, but I'm not going to believe you unless you spend billions of dollars and years of time on experiments to prove this. NAH NAH NAH NAH NAH I CAN'T HEAR YOU."

That's Hyman, the one guy who's looked into it and not changed his mind. That's the best you've got now, bub.
 

Void

Experiencer
<Gold Donor>
This guy gives a good summary of why it is far from proven, much better than I ever could. Maybe you linked it already, or referenced it, or whatever. My bad for not reading every article and link you posted; I'll cop to that.

Remote Viewing, what should we think? - The Skeptics Society Forum

3rd post and lower specifically, but only 5 posts total so read the whole thing.

I'll admit right now, that is literally the extent of research I intend to do on this subject, and I'm sure you'll argue against all of it, but the rest of us read something like this and don't need to go any further to realize it is complete bullshit. You call that closed-minded, and laud yourself for being open to new ideas, but most of us need more than just vague results to "dive down the rabbit hole". As gets posted all the time here, extraordinary claims require extraordinary evidence, not "well, that wasn't 100% negative, only 99.9%, so you're telling me there's a chance!" results.

As I said before, I firmly believe the type of person who is willing to grab hold of that tiny chance is someone who needs to "know" things that others don't, which makes them feel smarter and/or superior.
 

Skanda

I'm Amod too!
They are then ostracized, and lumped in with the "nutjobs".

Probably because they are. Being a scientist does not preclude someone from losing their mind over stupid shit. They become ostracized because they turn their back on real science in order to push their pet theories. Your own Jessica Utts there is a shining example. Her own partner called her methods into question.
 

Himeo

Vyemm Raider
This guy gives a good summary of why it is far from proven, much better than I ever could. Maybe you linked it already, or referenced it, or whatever. My bad for not reading every article and link you posted; I'll cop to that.

Remote Viewing, what should we think? - The Skeptics Society Forum

3rd post and lower specifically, but only 5 posts total so read the whole thing.

I'll admit right now, that is literally the extent of research I intend to do on this subject, and I'm sure you'll argue against all of it, but the rest of us read something like this and don't need to go any further to realize it is complete bullshit. You call that closed-minded, and laud yourself for being open to new ideas, but most of us need more than just vague results to "dive down the rabbit hole". As gets posted all the time here, extraordinary claims require extraordinary evidence, not "well, that wasn't 100% negative, only 99.9%, so you're telling me there's a chance!" results.

As I said before, I firmly believe the type of person who is willing to grab hold of that tiny chance is someone who needs to "know" things that others don't, which makes them feel smarter and/or superior.

I read the thread, and there are some very strong arguments made in it. High-quality skepticism. And yet, again, it's proof of what I've been saying: skeptics are not skeptical enough. Most are contrarians who will make an assertion and then fail to follow up on it.

Point by point.

Claim: Selective reporting, the "file-drawer problem." Or: "Yes, an effect is shown, but only because X number of negative studies were never published. If those are taken into account, the effect disappears."

Counter-Claim: You can account for a selective reporting bias using statistics. The PSI effect is still there.

TL;DR: Against the (current) roughly 800 studies, 54,000 negative studies would have to have been conducted and never reported to dismiss the effect. "Rosenthal suggests that an effect can be considered robust if the failsafe number is more than five times the observed number of studies" (4,000 vs. 54,000).

In other words, the PSI effect is so big it would take 54,000 unreported negative studies to erase it, when Rosenthal's rule of thumb only asks the failsafe number to exceed five times the observed count, about 4,000 here.

http://www.deanradin.com/evidence/Radin1989.pdf

Pages 1508-1509:

4.2. The "Filedrawer" Problem

Although accounting for differences in assessed quality does not nullify the effect, it is well known in the behavioral and social sciences that nonsignificant studies are published less often than significant studies (this is called the "filedrawer" problem). If the number of nonsignificant studies in the filedrawer is large, this reporting bias may seriously inflate the effect size estimated in a meta-analysis. We explored several procedures for estimating the magnitude of this problem and to assess the possibility that the filedrawer problem can sufficiently explain the observed results.

The filedrawer hypothesis implicitly maintains that all or nearly all significant positive results are reported. If positive studies are not balanced by reports of studies having chance and negative outcomes, the empirical Z score distribution should show more than the expected proportion of scores in the positive tail beyond Z = 1.645. While no argument can be made that all negative effects are reported, it is interesting to note that the database contains 37 Z scores in the negative tail, where only 30 would be expected by chance. On the other hand, there are 152 scores in the positive tail, about five times as many as expected. The question is whether this excess represents a genuine deviation from the null hypothesis or a defect in reporting or editorial practices.

This question may be addressed by modeling based on the assumption that all significant positive results are reported. A four-parameter fit minimizing the chi-square goodness-of-fit statistic was applied to all observed data with Z ≥ 1.645, using the exponential:

[Formula on page 1508] (1)

to simulate the effect of skew or kurtosis in producing the disproportionately long positive tail. This exponential is a probability distribution with the same mean and variance as the normal distribution, but with kurtosis = 3.0.

To begin, the null hypothesis of a (0, 1) normal distribution with no kurtosis was considered. To account for the excess in the positive tail, N = 585,000 filedrawer studies were required, and the chi-squared statistic remained far too large to indicate a reasonable fit (see Table I). This large N, in comparison with the 597 studies actually reported, together with the poor goodness-of-fit statistic, suggests that the assumption of a (0, 1) normal distribution is inappropriate.

[Table on page 1509] (2)

Adding simulated kurtosis to a (0, 1) normal distribution by mixing exponential [Eq. (1)] and normal distributions in a 1:1 ratio reduced N by two orders of magnitude, and ratios of 2:1, 3:1, and 10:1 exponential to normal (E:N) yielded further small improvements. However, the chi-squared statistic still indicated a poor fit to the empirical data. Applying the same mixture of exponential and normal distributions, but starting from the observed values of N = 597, mean Z score = 0.645, and standard deviation = 1.601, with the constraint that the mean could only decrease from 0.645, resulted in much better fits to the data. Table I shows the results.

This procedure shows that the null hypothesis is unviable, even after allowing a huge filedrawer. The chi-square fit vastly improves with the addition of kurtosis, but only becomes a reasonably good fit when mean and standard deviation are allowed to approximate the empirical values. The filedrawer estimate from this model depends on a number of assumptions (e.g., the true distribution is generally normal, but has a disproportionately large positive tail). It suggests a total number of experimental studies on the order of 800, of which three-fourths have been formally reported.

A somewhat simpler modeling procedure was applied to the data assuming that all studies with significant Z scores in either the positive or negative tail are reported. The model is based on the normal distribution with a standard deviation = 1, and estimates the mean and N required to account for the 152 Z scores in the positive tail and 37 Z scores in the negative tail. This mean-shift model, which ignores the shape of the observed distribution, results in an N = 1,580 and a mean Z score = 0.34. These modeling efforts suggest that the number of unreported or unretrieved RNG studies falls in the range of 200 to 1,000.

A remaining question is, how many filedrawer studies with an average null result would be required to reduce the effect to nonsignificance (i.e., p < 0.05)? This "failsafe" quantity is 54,000, approximately 90 times the number of studies actually reported. Rosenthal suggests that an effect can be considered robust if the failsafe number is more than five times the observed number of studies (21).
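If you want to check the arithmetic yourself, here's a minimal sketch. The excerpt doesn't reproduce the exact failsafe formula, so I'm assuming the standard Stouffer-style version usually attributed to Rosenthal; the inputs (597 reported studies, mean Z = 0.645, the 152/37 tail counts, and the mean-shift fit of N = 1,580 with mean 0.34) are taken from the quoted text.

```python
from scipy.stats import norm

Z_CRIT = 1.645                  # one-tailed p < .05 threshold
k, mean_z = 597, 0.645          # reported studies and mean Z, from the excerpt

# Under the null, ~5% of studies land beyond 1.645 in each tail.
print(round(0.05 * k))          # ~30 expected; observed: 152 positive, 37 negative

# Stouffer-style failsafe N: the number X of unreported null (Z = 0) studies
# needed before the combined score sum(Z) / sqrt(k + X) falls to Z_CRIT.
sum_z = k * mean_z
failsafe = (sum_z / Z_CRIT) ** 2 - k
print(round(failsafe))          # ~54,000, matching the excerpt
print(failsafe > 5 * k)         # Rosenthal's "five times" rule: True

# Sanity check of the quoted mean-shift model (N = 1,580, mean Z = 0.34).
N, mu = 1580, 0.34
print(round(N * norm.sf(Z_CRIT - mu)))    # ~152 studies in the positive tail
print(round(N * norm.cdf(-Z_CRIT - mu)))  # ~37 studies in the negative tail
```

Run as-is, it reproduces the 54,000 figure and both tail counts to within rounding, so the excerpt's numbers are at least internally consistent.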

I'll get to the other claims point by point. They are interesting; thank you for taking the time to discuss this.
 

Himeo

Vyemm Raider
Probably because they are. Being a scientist does not preclude someone from losing their mind over stupid shit. They become ostracized because they turn their back on real science in order to push their pet theories. Your own Jessica Utts there is a shining example. Her own partner called her methods into question.

I'm more than happy to dig into the methods, numbers, results, and criticism.

Skeptics are not skeptical enough. More often they're just lazy.
 

Skanda

I'm Amod too!
You'll have to find someone else to "debate" your lunacy. You're on the same level as Lumie as far as I'm concerned, not worth the effort. I'm just here to watch the train wreck that is your life on these forums.
 

Himeo

Vyemm Raider
Claim: CIA/military silence on a subject is no indicator either way of whether this exists; they indiscriminately classify everything that has to do with secret espionage or weapons research by default.

Accepted. Military research and security classification prove nothing one way or the other.

Claim: If this remote viewing worked so well, they'd have kept on doing it, but it seems from the evidence that they rarely got anything useful out of it. But, as military research programs are prone to do, they keep the longshot option going for a long while to see if anything materializes, since breakthroughs often come from surprising places.

Counter-Claim: They did get useful information from it.

https://www.cia.gov/library/readingroom/docs/CIA-RDP96-00789R002600360002-3.pdf

Pg. 26: Hostage Search Project

Task: Locate LTC Higgins

Info Provided: Basic Abduction Information

Data Generated:

  • Sources described transient holding areas, escape routes.
  • Specific village (Arab Salim) was initial holding area.
  • Specific building and holding location identified.

Comments:

  • Data consistent with later intelligence and assessments
  • Consistent with later data

One example of five provided in that document (pages 26-36).

Second Counter-Claim: They (CIA) disclosed last week that they never stopped researching it and have, in fact, continued to expand the programs.

CIA Posts More Than 12 Million Pages of CREST Records Online — Central Intelligence Agency

More to come. Void posted a lot of information in those posts. Thanks again.
 

Himeo

Vyemm Raider
You'll have to find someone else to "debate" your lunacy. You're on the same level as Lumie as far as I'm concerned, not worth the effort. I'm just here to watch the train wreck that is your life on these forums.

I'm glad to have you. Let me know if you find an error in my logic or reasoning and make sure to throw in personal attacks.

Stay classy.
 

Skanda

I'm Amod too!
Tell you what, go hire a remote viewer and have them conjure up my location for you. If they get it right, then I'll be a believer.

I won't be holding my breath.
 

Kiroy

Marine Biologist
<Bronze Donator>
Tell you what, go hire a remote viewer and have them conjure up my location for you. If they get it right, then I'll be a believer.

I won't be holding my breath.

I tried this pages ago: I thought of a number between 1 and 100 for about 5 minutes, enough time for him to respond with my number. He failed.

It was 55 by the way.
 

Himeo

Vyemm Raider
Easy claims to dismiss (lazy skepticism).

Claim: If this was real they would not have shut down the programs (Project Star Gate).

Claim: They (CIA) are not hiding anything. They are just embarrassed that they spent millions on frauds and cannot get over it.

Counter-Claim: They (CIA) claimed they shut down the programs (Project Star Gate). Last week they revealed they'd lied; in fact, they expanded their research in secret and are (presumably) still researching it.

-

Claim: This can't be true because these people (are crazy) (have weird beliefs) (used to be Scientologists) (are a bunch of conspiracy nuts).

Counter-Claim: Ad hominem attacks. Let's focus on the empirical data and follow where it leads.

More to come. Thanks for posting Void.