Journal of Nonlocality and Remote Mental Interactions Volume II Number 2 July 2003
6. Why the efficacy of consciousness cannot be limited to the mind, by Titus Rivas
Received: September 22, 2003
In January 2003 Hein van Dongen and I published our paper Exit Epiphenomenalism (www.emergentmind.org/rivas-vandongen.htm), the updated English translation of an original Spanish paper, in which we show that the notion of an irreducible and at the same time epiphenomenal consciousness is incoherent. Acknowledging the irreducibility of consciousness logically entails acknowledging the efficacy of consciousness. This is because the only justifiable claim to the reality of consciousness must be based upon knowledge of consciousness, which in turn can only exist if consciousness has an impact upon our cognition.
After the publication of our paper, several readers asked whether this logical consequence of the acceptance of consciousness could be limited to an impact upon the mind.
If so, a partial epiphenomenalism could be sustained, and therefore also physicalism regarding the physical world. In this short paper I will show why the logical implication of (ultimate, i.e. direct or indirect) conscious efficacy cannot be limited to 'internal' psychopsychical causation and must also be extended to psychogenic causation of physical events.
Our paper Exit Epiphenomenalism conclusively established that consciousness must have an impact upon cognitive processes in the mind. If it did not have such an impact, we would not be able to know that there are subjective experiences. Several theorists seem to find the acceptance of such a psychopsychical efficacy of consciousness less problematic than the acceptance of a notion of general efficacy. In our paper we rejected the position of parallelism, as it implies that the physical world never has any impact upon cognition and therefore becomes entirely unknowable to the mind. The position that readers of our paper have proposed is therefore a combination of psychopsychical efficacy, an impact of certain physical processes upon the mind, and intraphysical physicalism for the physical world.
There are two arguments against the position that the psychogenic efficacy of consciousness would be limited to the realm of the mind itself. The first of these has already been mentioned in the paper Exit Epiphenomenalism which defends a general efficacy of consciousness (i.e. not limited to the mind).
Intraphysical physicalism makes it impossible to specifically talk and write about consciousness
If consciousness has no (direct or indirect) impact upon physical processes, it becomes impossible to specifically talk or write about consciousness. In other words, we would have personal reasons to believe in the existence of subjective experiences, but we would be unable to express those reasons psychomotorically through speech or writing. However, the position of an efficacy of consciousness limited to psychopsychical causation implicitly claims to be a position that can be expressed in words, as otherwise it could not exist within the realm of interpersonal or collective philosophical debate. Therefore, the position of limited efficacy is an incoherent position.
The exclusively somatogenic causation of consciousness is inherently incompatible with psychopsychical efficacy.
Implicitly, the notion of an efficacy of consciousness which would be limited to psychopsychical causation is part of a theory according to which consciousness would be a product of neural processes, i.e. it would be caused by the brain.
At the same time, consciousness would have an impact upon the mind, but not upon the brain. This implies that some processes of the mind would be caused by consciousness, while they would at the same time be the product of neural processes. The problem is that during the mental conceptualisation of consciousness, the supposed neural processes (that would ‘support’ consciousness) or “substrates” cannot themselves be based upon any (direct or indirect) impact of consciousness.
They can never follow the cognitive direction of specific considerations that are part of this specific (psychopsychical) conceptualisation of consciousness. There seem to be two possible escapes.
- Either this psychopsychical process is not specifically supported by specific (computational) neural processes, which goes against the notion of the content of consciousness as something which is always specifically caused by the brain. The problem with this escape is that it is very strange that exclusively psychopsychical conceptualisation would not be specifically supported by neural (computational) processing, whereas all other psychological processes would. Within intraphysical physicalism, there is no way for the brain to notice whether a specifically psychopsychical process takes place, as registration of such a process would entail a psychogenic effect upon brain processes during such a registration (and any psychogenic effect on the physical world is incompatible with intraphysical physicalism). The brain would never ‘know’ when it should specifically (computationally) support mental processes and when it shouldn’t.
- Or consciousness is in fact never influenced by neural processing. Within the theory I discuss here, this is impossible, as it would imply a type of parallelism which entails that we can have no reason to believe in a physical world, whereas the existence of a physical world is a precondition for intraphysical physicalism. Only in the case of ontological idealism is it possible to deny any type of psychophysical and physicopsychical interaction.
The (ultimate) efficacy of consciousness, both intrapsychically and psychophysically is logically entailed by the recognition of the reality of consciousness defined as irreducible, qualitative and subjective awareness.
6533 RT Nijmegen
Titus Rivas, April 2003
In this paper and in Exit Epiphenomenalism we have not explicitly addressed all recent versions of epiphenomenalism, such as that of Jaegwon Kim, because the aspects we have focused on are not specific to these recent versions. They belong to any form of epiphenomenalism, i.e. to the very essence of the epiphenomenalist position.
For further reading, see http://members.lycos.nl/Kritisch/limitedefficacy.html
6. RE: Germine's "Induction of a Stereotactic Auditory Hallucination by an Extremely Low Frequency (ELF) Electromagnetic Field"
From: Lian Sidorov
September 4, 2003
Mark Germine's experiment is particularly important in light of another study, by Dr. Elizabeth Rauscher and William van Bise, which showed that ELF magnetic fields intersecting within the cranial volume of blindfolded subjects could produce visual hallucinations such as circles, ellipses and triangles. In that study (reported by Robert Becker in his 1990 book "Cross Currents", p. 105) the magnetic fields were generated by two coils of wire pulsing at slightly different frequencies, such that a third, extremely low "beat" frequency was created at the intersection point, corresponding to the subject's head. The type of image evoked could be changed by varying the frequency of one of the coils. However, the coils' magnetic field strength was so small that no nerve impulses could have been triggered by them.
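The "beat" in this setup is ordinary wave interference: superposing two sine waves whose frequencies differ slightly produces an envelope oscillating at the difference frequency. A minimal sketch of the arithmetic (the coil frequencies below are illustrative assumptions, not values reported in Becker's account):

```python
def beat_frequency(f1_hz: float, f2_hz: float) -> float:
    # The envelope of sin(2*pi*f1*t) + sin(2*pi*f2*t)
    # oscillates at the difference of the two frequencies.
    return abs(f1_hz - f2_hz)

# Hypothetical example: two coils a few hertz apart yield an ELF
# beat in the alpha-band range at their intersection point.
print(beat_frequency(500.0, 492.0))  # -> 8.0
```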
Becker's interpretation is that such ELF currents could interact with the brain via semi-conducting perineural elements and their associated DC current. In effect, we believe this might modulate the "background" electrical activity of the brain or, in Gariaev's genetic terminology, the context which is ultimately responsible for the expression of recognizable patterns. In this scenario, one can ask whether such very weak background currents might subtly alter the topology or energy landscape of the brain and thus play a role in stabilizing/destabilizing neural attractor basins which are ultimately responsible for these visual and auditory hallucinations. Alternatively, we could follow Becker's suggestion that consciousness is more closely associated with the body's weak DC "morphogenetic field" than with digital impulses in the brain and hence that the ELF stimulation acted directly on this interface, or organ of conscious perception. If that is so, we are not far from Pitkanen's magnetic sensory canvas hypothesis.
5. Induction of a Stereotactic Auditory Hallucination by an Extremely Low Frequency (ELF) Electromagnetic Field
From: Mark Germine
Received: September 04, 2003
About three years ago I suggested that the Taos Hum, a humming noise heard by certain individuals in the area of Taos, New Mexico, and elsewhere, might be the result of the effect of extremely low frequency (ELF) electromagnetic radiation on the brain (http://iesk.et.uni-magdeburg.de/~blumsche/M112.html). In the course of investigating this possibility I constructed a crude apparatus for generating ELF radiation and tested the effect of such radiation on myself.
The apparatus consisted of a Fender Princeton Chorus (trademark) amplifier connected to an electromagnetic coil. The amplifier was adjusted to produce a moderate volume, high frequency noise oscillating at eight cycles per second (8 Hertz). A copper coil was constructed of medium gauge copper wire wrapped several times around the head in a mid-coronal orientation such that the output of the amplifier was directed through the coil rather than to the speaker on the amplifier.
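The drive signal described above (noise whose amplitude rises and falls eight times per second) can be sketched as amplitude modulation. The sample rate and duration below are arbitrary assumptions for illustration, not parameters of the actual apparatus:

```python
import math
import random

def modulated_noise(duration_s: float = 1.0, sample_rate: int = 1000,
                    mod_hz: float = 8.0, seed: int = 0) -> list:
    # High-frequency noise with its amplitude modulated at mod_hz,
    # approximating "noise oscillating at eight cycles per second".
    rng = random.Random(seed)
    samples = []
    for i in range(int(duration_s * sample_rate)):
        t = i / sample_rate
        envelope = 0.5 * (1.0 + math.sin(2.0 * math.pi * mod_hz * t))
        samples.append(envelope * rng.uniform(-1.0, 1.0))
    return samples

signal = modulated_noise()  # one second of 8 Hz modulated noise
```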
When oriented in a slightly oblique hat-band orientation in full contact with the head an auditory hum was heard within the cranium in a tabular distribution that corresponded to the orientation of the coil. The hum was heard throughout the intra-cranial distribution defined by the plane of the copper coil. I have never before or since experienced an auditory hallucination.
Although the experiment was rather crude, I am reporting it because of the apparent induction of a stereotactic auditory hallucination by an ELF electromagnetic field. The extent to which direct electrical stimulation of the brain may have been involved is unknown. Whether the stereotactic distribution of the auditory hallucination was real or merely apparent is also unknown. The experiment does, however, raise interesting questions about the nature of hallucinations in terms of their relationship to the electromagnetic field of the brain and the brain's alpha rhythm, as well as the possibility of remote interactions occurring between the brain and ELF electromagnetic wave generators.
416 Jackson Street
Yreka, CA USA 96097
4. Continuous Creation Cosmology and the Constancy of the Speed of Light
From: Duane Elgin
To explore and to expand the foundations of physics, I would like to propose a new kind of continuous creation cosmology—one that differentiates the “continuous creation of the entire cosmos as a single system” from a well-established theory describing the “continual creation of atomic matter.” The latter theory was developed by astrophysicist Fred Hoyle and describes a steady-state cosmos where atoms are generated at a rate just sufficient to offset the dispersion produced by the expansion of the universe, thereby producing a cosmos with a relatively even density of matter throughout. In contrast, the theory of continuous creation of the cosmos refers to a process whereby the totality of the universe is continuously regenerated at a rate that is revealed by the constancy of the speed of light.
Although relativity theory and modern physics are founded upon Einstein’s insight that the speed of light is a limiting factor in our universe, we still do not have a clear explanation for this fact. Nonetheless, physicists have taken this as a given and constructed a powerful theory of the universe based upon it. But why is the speed of light a fundamental constant in the universe? Described below is a hypothesis for explaining the puzzling nature of the constancy of the speed of light.
This inquiry is premised on the hypothesis that the constancy of the speed of light at the local scale is a result of a larger process occurring at the cosmic scale; namely, the precise consistency of manifestation of our entire cosmos as a single standing wave embracing both the fabric of space-time and matter-energy. In other words, I hypothesize that our cosmos is a system that is being continuously "regenerated" (as a standing wave of matter-energy) and "updated" (as a seamless and flowing fabric of space-time) at each instant. It is impossible to measure this rate "objectively" (as an external observer) because everyone and all measuring instruments are inside the system being observed. The speed of emergence or the pace of arising of the overall system is impossible to determine "objectively" (as observers standing apart from the object) because we cannot stand outside the cosmos in its process of becoming itself and measure "it" coming into existence. Because we are inside and integral to this flow of continuous regeneration, we can only make inferences regarding the pace at which this flow is occurring. This unyielding fact points toward a fundamental attribute of the cosmos: the constancy of the speed of light.
Continuous creation cosmology hypothesizes that the constancy of the speed of light is a result of the precise consistency with which the fabric of reality is dynamically woven together. In other words, the constancy of the speed of light is produced by, and is a result of, the pervasive evenness with which the overall cosmos is being generated as a unified system. In turn, the precise consistency of continuous creation at the cosmic scale has been interpreted as the constancy of the speed of light at the local scale.
Continuous-creation theory suggests a straightforward reason for the physical compression, time dilation, and increase in mass predicted by relativity theory as an object approaches light speed. Assuming the cosmos is being generated at a pace revealed by the constancy of the speed of light, then when an “object” (as a flow-through, standing wave) approaches the speed of light, it will necessarily run into itself in the process of becoming itself, and this will produce a literal compression of its dynamic structure in its direction of motion. No object (as a standing wave) can move ahead of the flow that continuously regenerates both the object and the surrounding cosmos. As an object (or flow-through subsystem of the larger standing-wave cosmos) tries to move ahead of the pace at which it is becoming manifest, it will progressively run into itself becoming itself—a self-limiting process that produces the increasing physical compression, time dilation, and mass predicted by relativity theory. Assuming our cosmos is a flow-through system that is being continuously regenerated at each moment, then this should logically produce a “limiting condition” (or boundary or threshold) as any “object” approaches the rate of regeneration for the overall system. In other words, “things” within the dynamically arising system cannot move ahead of (or outside of) the system within which they are continuously arising or coming into manifest existence.
Assuming the speed of light is a derivative or by-product of a more pervasive dynamic that reveals the base weaving rate for the manifestation of the entire cosmos, then this suggests that our dynamically arising cosmic system is bounded by two extremes, both involving the speed of light and the flow of regeneration of the cosmos. These two extremes seem to define the boundary conditions for existence in a relativistic universe—allowing objects to move freely relative to one another as long as they stay within the boundary conditions of the ever-regenerating system of which they are an integral part.
At the "slow end of the spectrum," if an object is being manifest at just the speed of light, then this is a "near zero condition": much like a waterlogged tree trunk barely visible at the surface of the water, material manifestation would be barely evident, as matter would be only barely emerging into this world. This is the base weaving rate, or the fundamental closing/converging speed of the cosmos as a self-regenerating system.
At the “fast end of the spectrum,” any “object” (or standing wave) can move up to the speed of light within the manifesting system before it will be pressing at the limits/threshold of its ability to overcome itself becoming itself.
Given the boundary conditions describing: 1) the base weaving rate for the overall cosmos (inferred from the constancy of the speed of light), and 2) the limiting condition for objects moving relativistically within the ever-regenerating cosmos (up to the speed of light), we can then infer the essentials of Einstein's equation, E = MC^2: the maximum energy "E" that any object "M" can achieve is equal to the mass of that object multiplied first by the speed at which it is coming into existence ("C") and then multiplied again by the speed of that object as it presses against the limits of the continually arising cosmos, which is up to the speed of light ("C").
Therefore, E = M x C x C
Because the theory of continuous creation views the constancy of the speed of light as a by-product of the consistency of cosmic-scale regeneration, it means that, as a derivative of a deeper dynamic, there is no reason that the speed of light cannot change over time. The only requirement is that it be consistent across the entire field of cosmic space-time so as to keep everything in its orderly place as it comes into existence moment by moment.
Summarizing, if the cosmos is being regenerated “at the speed of light,” then this implies that: First, there is an immense amount of flow-through occurring as the entire cosmos is being continuously sustained and regenerated as a single system. Everything (of matter-energy) and everyplace (of space-time) “carries” the energy of this light-speed manifestation—it is inherent and pervasive. Second, no “thing” can go faster than the speed of regeneration of the overall cosmic system within which it itself exists as a dynamically arising entity. Combining the concept of embedded or carried energy (the speed of cosmic arising or C) with the further understanding that the boundary condition for movement is again the speed of light within a self-regeneration system, then it follows that the maximum energy any thing can have (E) is equal to its mass multiplied by its inherent or carried light-speed energy times its boundary speed, again the speed of light. Therefore, E = MC^2.
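The summary above can be written compactly. Note that identifying both factors with the speed of light is the author's heuristic reading, not a derivation within relativity; the subscripted symbols below are labels introduced here for illustration:

```latex
% Heuristic reading of E = mc^2 in continuous-creation terms:
% one factor of c is the "speed of arising" of the object,
% the other is its boundary speed within the regenerating cosmos.
E_{\max} = M \, c_{\mathrm{arising}} \, c_{\mathrm{boundary}} = M c^{2},
\qquad c_{\mathrm{arising}} = c_{\mathrm{boundary}} = c
```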
Duane Elgin © 1988 - 2003
3. Re: "Are memories really stored in the brain?" by Nicholas Prince
From: M. Pitkanen
The article contains ideas which resemble those underlying the TGD-based model of memory.
a) Memories reside in the (geometric) past and are communicated to the (geometric) now, so that no memory storage is needed in the (geometric) now.
b) As far as the communication method is concerned, the idea is taken from the aerial circuit used to receive radio waves. When the frequency of an incoming radio wave equals the tunable resonance frequency of the aerial circuit, resonance occurs and the amplitude-modulated signal is received.
Here the frequency is formally replaced with the Hamiltonian of subsystems, and one can say that the brain of the past sends, not a radio wave, but its Hamiltonian to the future. Just as the resonance frequency of the aerial is varied to tune in to the sender of the radio wave, the brain varies its subsystem Hamiltonian in order to achieve memory recall.
What a signal from the past carrying information about the Hamiltonian would mean is a problematic question. Obviously this kind of mechanism is not possible in the standard physics context.
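The aerial analogy in (b) rests on ordinary tuned-circuit resonance: an LC receiver responds when the incoming frequency matches f = 1/(2*pi*sqrt(L*C)), and "tuning" means varying L or C. A minimal sketch with illustrative component values (assumptions of mine, not taken from the article):

```python
import math

def resonance_hz(inductance_h: float, capacitance_f: float) -> float:
    # Resonant frequency of a tuned (aerial) LC circuit:
    # f = 1 / (2*pi*sqrt(L*C)). Varying C "tunes" the receiver,
    # the analogue of the brain varying its subsystem Hamiltonian.
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: 100 uH with 2.5 nF resonates near 318 kHz.
f = resonance_hz(100e-6, 2.5e-9)
```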
Some comments about the basic assumptions of the model.
1. Macroscopic (and macrotemporal) quantum coherence at the level of the brain.
This assumption is common to all quantum theories of consciousness, and the problem is that standard physics does not support it.
2. The Hamiltonian describing the dynamics of the brain subsystem responsible for memories varies in time because the brain alters it.
The assumption makes sense if one assumes that Hamiltonian dynamics is a local phenomenological description of the subsystem. The Hamiltonian of, say, quantum electrodynamics or of a TOE (if it allows a Hamiltonian description at all) cannot be in question, since it is fixed completely by the basic physics.
The philosophical problems relate to the differences between two concepts of time: subjective time, the irreversible time associated with dissipative dynamics, in which the subjective future does not yet exist; and geometric time, a spacetime coordinate involved with unitary, non-dissipative quantum dynamics, in which the geometric future and past co-exist.
In the statement "brain changes the Hamiltonian", "change" is understood in the sense of subjective time; in the statement "Hamiltonian generates unitary time development", time is understood as geometric time. I am not sure whether the two times are here identified. The author does not discuss the quantum measurement problem: that is, he leaves open whether quantum jumps occur or not.
3. The subsystem Hamiltonian is assumed to characterize the STRUCTURE of the subsystem: structure in this sense does not depend on the state of the system at all. The Hamiltonian also characterizes the possible memories of the subsystem. Memories are here understood as episodal memories, re-experiences during some time interval. An experience is identified as being characterized completely by the unitary time development of the system and by the state at the initial time.
4. The notion of focusing on the subsystem Hamiltonian as something more or less equivalent to memory recall is introduced. The focusing is based on a generalization of the aerial mechanism, with frequency replaced by the Hamiltonian. Focusing gives rise to a new Hamiltonian corresponding to the Hamiltonian of the subsystem at some moment of the past. This automatically induces a perturbation to the Hamiltonian, driving the time development of the recent state into the state corresponding to the earlier time evolution. If I understand correctly, the earlier experience would be re-experienced, that is, an episodal memory.
A short comparison with my own TGD approach is perhaps in order, since the TGD approach (see my homepage http://www.physics.helsinki.fi/~matpitka/ and articles at http://www.emergentmind.org/journal.htm) also makes memory storage unnecessary and involves communication between the geometric past and future.
1. In TGD the Hamiltonian description is a phenomenological description making sense for subsystems as long as they stay in a state of macroscopic/macrotemporal quantum coherence. This coherence is made possible by the spin glass degeneracy of the dynamics of the spacetime surfaces, which implies a huge degeneracy of quantum bound states not visible in standard-physics measurements, in turn implying much longer decoherence times than standard physics predicts.
2. Subjective time and geometric time are different concepts in TGD, and each quantum jump in principle replaces the Hamiltonian of the subsystem with a new one. The so-called zero modes, which can be said to be classical, non-quantum-fluctuating degrees of freedom in the sense that a localization occurs for them in every quantum jump, are external parameters of the Hamiltonian. The variation of the subsystem Hamiltonian corresponds to the variation of these parameters and could indeed be induced by external signals, so that the system could quite well modify its subsystem Hamiltonian. Even more, during macro
3. The author identifies conscious experience with time development of quantum state.
I identify the self as a sequence of quantum jumps, with conscious experience defined by a statistical average of the increments of various quantum numbers and zero modes over quantum jumps. The self mathematizes the phenomenological notion of the observer and can be regarded as a subsystem able to avoid the generation of bound state quantum entanglement during the sequence of quantum jumps (a quantum jump has a complex anatomy: unitary process + state function reduction + state preparation process), thus behaving autonomously and having a quantum identity. Everything is conscious, but consciousness can be lost by the generation of bound state quantum entanglement. For subsystems in a state of macrotemporal quantum coherence (in a state of "oneness") one could indeed, in good approximation, identify the self with a unitary time development of the state, but not otherwise.
4. The pieces of new physics in TGD approach
a) Topological field quantization means a new view of classical fields. The spacetime surface can be seen as a generalized Feynman diagram with lines thickened to four-manifolds. One can assign to any system its magnetic body, consisting of a complex magnetic flux tube structure, represented topologically as spacetime sheets and having astrophysical size. Classical radiation fields are quantized into "topological light rays", ideal for precisely targeted classical communication and for timelike quantum entanglement.
b) The generalization of the subsystem concept is forced by the notion of many-sheeted spacetime. Two unentangled subsystems (selves) can have subsystems/subselves/mental images which can entangle. This entanglement gives rise to a telepathic sharing of mental images, resulting in their fusion (for instance, the fusion of the right and left visual fields into stereo vision is an example of the fusion of mental images by entanglement).
c) The notion of timelike quantum entanglement, which makes sense because of the failure of strict classical determinism for the dynamics of spacetime surfaces, is crucial for understanding episodal memories as a sharing of mental images by the selves of the geometric now and past.
d) For spacetime surfaces the classical energy can have negative sign, as in the case of Feynman diagrams (if the time orientation is non-standard), since the energy-momentum tensor of general relativity decomposes into a collection of vector currents labelled by the isometry generators of the Poincare group. This also solves the problems related to the definition of the energy concept in General Relativity. In particular, negative energy topological light rays are possible, and for various reasons they are ideal spacetime correlates of timelike quantum entanglement.
3. The TGD approach also assumes that memories are not stored in the brain of the geometric now but in the brain of the geometric past, where things happen.
a) The basic idea is that memory from a temporal distance of one year results from looking at a mirror at a distance of 1/2 light year. A more realistic view is based on the notion of the magnetic body associated with the human brain and body, consisting of closed magnetic flux tubes returning back to the brain like the field lines of a dipole field. Magnetic flux tubes are analogous to wave guides along which topological light rays propagate. For quantum communication (resp. classical communication with subluminal effective phase velocity, say the EEG phase velocity) this kind of wave guide has a size scale of the order of light years (resp. the Earth's magnetosphere).
b) The communication involved in long term memory recall is analogous to the emission and absorption of virtual particles, topologically, on a time scale of even light years. Negative energy virtual particles induce timelike entanglement.
c) Non-episodal memories are communicated classically to the brain of the geometric now; typically highly symbolic memories are in question. Episodal memories are essentially re-experiences via timelike quantum entanglement: a sharing of mental images between the geometric now and the geometric past.
d) For both mechanisms the desire to remember could be communicated by a sharing of mental images, at least for those memories which are not continually communicated to the geometric future automatically (very important memories needed all the time, perhaps short term memories). Metabolic economy poses restrictions here and favours telepathic communication of the desire to remember.
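The mirror picture in point (a) above is plain light-travel arithmetic: a signal to a mirror at distance d light years and back takes 2d years, so re-seeing an event T years in the past needs a mirror at T/2 light years. A trivial sketch of that relation:

```python
def mirror_distance_light_years(memory_age_years: float) -> float:
    # A round trip to a mirror d light years away takes 2*d years,
    # so a memory T years old corresponds to d = T/2 light years.
    return memory_age_years / 2.0

print(mirror_distance_light_years(1.0))  # -> 0.5
```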
4. As far as non-episodal memories are concerned, the brain can be said to serve as an aerial in the conventional sense. The classical signal from the geometric past would carry information about conscious experience, coded into the classical field pattern, in turn inducing some process giving rise to the desired conscious experience.
For episodal memories a quantum aerial mechanism would be involved: the timelike quantum entanglement would have as its correlate a "topological light ray", but only its length (fundamental frequency) would matter, since the sharing of mental images would involve no transformation of information into symbolic form (a classical field pattern). An enormous amount of information would be transferred, but the problem is that most of this information is irrelevant for survival, and it is difficult to tell whether the memory is about the geometric now or the geometric past. Therefore this manner of remembering has been the loser in the fight for survival (eidetic memories, the sensory memories of synesthetes, and the memory feats of people with certain kinds of brain damage are exceptions).
2. Musings on remote viewing, spacetime and topological geometrodynamics
(questions in blue by Lian Sidorov, answers in black by Matti Pitkanen)
LS I guess that RV goes back to what Patanjali called "looking into the nature of reality", connecting to the event itself - but how do you make that connection? What is the "event itself" - doesn't that too have to be translated into a particular language, or framework of reference?! And that requires an observer, a chosen perspective. There must be literally an ocean of impressions that one becomes aware of, which do not have a clear translation into our usual set of experiences... But I think that in time one probably learns to interpret some of them - say in intuitive diagnosis, where the viewer basically describes the pathology at the tissue and cellular level - this is entanglement with matter on a level we are not accustomed to - it's like seeing matter with its own eyes ;-)
MP I would talk about a mental image instead of the "event itself". The universe is full of mental images which correlate with what happens there. Some of these mental images are associated with the brain, most of them with what we usually call "dead matter". This kind of information is something very primitive: no linguistic structures would make sense for the remote viewer, since they would require something like common memes at the level of DNA (introns). Could emotions and basic sensory qualia represent universal mental images?
LS There seem to be so many filters between viewer and target: 1. viewer's belief structure = sphere of consciousness; 2. consensus re: RV coordinate and protocol, time framework, tasking; 3. continuous noise from other minds who share our collective ("multibrained") consciousness; 4. input from our future and past self, which we most likely scan on a continuous basis, and which includes all possibly relevant feedback; 5. finally, the actual knowledge about the event, but which may reside outside the viewer's conscious sphere = confirmation within this lifetime. So in such cases it seems that the viewer's "sphere of knowledge" includes subconscious access to the collective mind = to a higher level Self whose lifespan may be many times that of the individual
MP Yes. I would add also 6. viewer's sensory input and all that is related to his motor actions, which is probably the dominating contribution which viewer must silence.
LS This brings us to next point - that all retro-pk experiments have been essentially defined by comparison with what was to be expected statistically - what we thought the outcome would have been without intervention. But if there is no difference in terms of observation between past and future, and observation is truly part of what is being observed, as all QM and psi evidence converge to prove, then the problem should not be defined in terms of violating causality but simply in terms of "event x" (the RNG data recorded on the disk) will be viewed through the perspective of the subjects trying to influence it. It's another way of saying that "time" has not been defined for the bits of data recorded on the disk and will not be "flowing" until the conscious intent to influence each particular bit.
If we accept that time is a construct, that time is given by our own perspectives, why not accept that time is an ingredient of each "event" - that this ingredient only materializes in the presence of observation of that "event" - and hence that "time" may not flow evenly for all objects in the universe, as we normally picture it: it is not a great flood overtaking all objects in the universe with the same front wave, but something that emerges, that is micro-generated at the level of each object/object system as it is consciously observed?
For this reason I don't think we ever connect to "absolute reality" - I think the only way it makes sense to talk about a specific target at a specific time is if the information is also encoded as a function of observers' perspectives - rather than in absolute, unobserved terms.
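The chance-expectation baseline that LS mentions above for retro-pk experiments can be made concrete with a simple normal approximation to the binomial distribution. The sketch below is illustrative only; the function name and the example bit counts are assumptions, not part of any experiment described here.

```python
import math

def rng_deviation_z(ones: int, n: int, p: float = 0.5) -> float:
    """Z-score of an RNG run against chance expectation.

    ones: number of 1-bits observed; n: total bits recorded;
    p: probability of a 1-bit under the chance hypothesis.
    Uses the normal approximation to the binomial distribution.
    """
    expected = n * p
    sd = math.sqrt(n * p * (1 - p))
    return (ones - expected) / sd

# Hypothetical run: 5200 ones in 10000 bits lies 4 standard
# deviations above what chance alone would predict.
print(rng_deviation_z(5200, 10000))  # -> 4.0
```

A run exactly at chance (5000 ones in 10000 bits) yields z = 0; the larger |z|, the stronger the claimed deviation from the no-intervention expectation.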
MP Yes. The idea about absolute reality does not make sense to me. Each moment of consciousness replaces "absolute reality" with a new one.
LS Now, in TGD, you say that spacetime is a correlate of consciousness, that consciousness is not embedded into spacetime. Can you then say that spacetime derives from intersecting observations?
MP I would say that spacetime surfaces provide (very) unfaithful representations of our contents of consciousness. a) Here the nondeterminism of the Kahler action and also p-adic nondeterminism come in. Nondeterminism forces one to replace a spacelike 3-surface with a sequence of spacelike 3-surfaces with timelike separations as the basic dynamical unit: in this wider framework one can save determinism. These dynamical units are like a sequence of photos taken at certain time intervals: in the old-fashioned classical physics world a single photo would be enough to fix future and past completely. These sequences of photos are spacetime correlates for quantum jump sequences, and a spacetime representation for the quantum jump sequence which defines a self and its conscious experience. b) This makes self-reference possible. In quantum jumps the self creates a spacetime surface providing a symbolic representation of the contents of consciousness it had before the quantum jump. c) I would not say that *spacetime derives from...* but that the *idea of spacetime derives from...* after many, many abstraction steps: selves becoming conscious of what they are conscious of by generating this spacetime representation. Think of what mathematicians are doing when they calculate. They represent their thoughts symbolically in the structure of spacetime and can look at them without danger of forgetting them.
LS This metaphor is great! But I need to ask you again - whose conscious experience? Does the universe evolve slightly differently for each of us according to our own q-jumps/moments of consciousness? Or do all (humans, let's say) share one universe with the same unitary evolution as a result of our collective mental experience? Joe McMoneagle has mentioned (in a rather cryptic allusion) that we do not necessarily share the same reality - that we only assume that our universes intermix... What do you think he was referring to?
MP Every material island/quantum in TGD has a p-adic counterpart, a primitive consciousness associated with it; a material island can also have a p-adic counterpart providing a cognitive representation of it, but this is not necessary. I would say that real spacetime sheets at the most general level are a kind of symbolic representation, and the p-adic ones are cognitive representations. p-Adic nondeterminism makes the cognitive representations extremely flexible, and they can be nonrealistic: they are only piecewise realistic. Like those "modern" paintings consisting of pieces, each with a different perspective: locally realistic but not globally. This is necessary for cognition as discovery, since it must always build its world view by guessing and fitting the pieces together again and again, trying to extend local realism to global realism. In contrast to this, real symbolic representations are always realistic.
LS Then the universe is just this - matter and consciousness, like thoughts of a supreme being popping up in an amorphous phase space - no metric... But the awareness of each quantum of matter - awareness of itself plus awareness of other quanta around it - that intersection of perspectives creates the "relativistic" metric of space and time?! If that is the case, then indeed both space and time are sheer illusions.
MP I would modify it a little bit: it creates the *idea of a metric of space and time*. Spacetime sheets are something absolutely real (and also p-adic ;-) but not very many selves in the universe have become conscious even of the ideas of metric and spacetime! The reason I take spacetime so seriously is that without it I would have no physics. The second point is that monisms and dualisms always lead to a dead end: so would matter and consciousness. Matter understood as spacetime geometry - shape, size and classical fields; consciousness; and the configuration space spinor field in the infinite-dimensional space of 3-surfaces, which Joe might call "absolute realities": tripartism.
(Hopefully to be continued with other, equally perplexed opinions on RV ;-)
1. Remote Sensing and the One Mind Model
From: Mark Germine
In the February 2003 issue of JNLRMI Lian Sidorov addresses the topic of Entanglement and Decoherence Aspects in Remote Sensing: a Topological Geometrodynamics Approach. I would like to comment on a few elements of this article with respect to the One Mind Model of quantum reality and brain function (Germine, 2002).
On page 7 of the article, Sidorov writes: “Beyond the obvious implications of non-local information transfer, access to both physical and cognitive representations, and the apparent violation of causality suggested by pre-cognitively viewing a target that will later be chosen by a random number generator, the next salient feature that emerges is that this information transfer appears to be a little more complex than a mere ‘access to the universal Mind,’ or to that ‘extra dimension where there is zero separation’ between objects in time and space.”
From a systems perspective, hierarchies are constructed from the micro to the macro level. One such hierarchy would be the cell, tissue, organ, organ system, and organism. In Whitehead's philosophy of organism, which applies both directly and indirectly to quantum theory, each element of the organism prehends, grasps, or feels each other element, as a "quantum" of experience or actual occasion. Thus the tissue is the nexus of its constituent prehensions. Each cell is "internal" to each other cell, but it is the concrescence of the relations among the cells that constitutes the whole. This is what Whitehead called process.
This is how I view the One Mind, as the concrescence of the universal whole. On page 9 Sidorov refers to the “overall matrix” of the “mind-matter ‘universal operator’” as an “evolutionary blueprint.” This is the process I call reciprocal causality, in which the whole constitutes the parts. I refer to the general theory by which both hierarchies and inverted hierarchies come into play as “reciprocal systems theory.”
So, to get back to the notion of information flow arising through "access to Universal Mind," I would agree that it's quite a bit more complex than this. Information flows in both directions. In Whitehead's ontology the actual entity, the One Mind, would be the same as the actual occasion, the universal concrescence, or the "universal operator."
The process that leads to the concrescence of the actual entity is called actualization. Actualization is equivalent to what is known in psychology as percept genesis, and occurs in what Whitehead called the mode of causal efficacy. The mode of causal efficacy is the quantum multiverse, which is characterized by variations in the Higgs Field and the fundamental constants, as well as by all the possibilities inherent in the wavefunction or "overall matrix" of a particular universe. In terms of psychology, the "overall matrix" is the collective unconscious in its widest sense. The classical universe is the universe that we perceive or know, in what Whitehead called the mode of presentational immediacy. In Platonic terms, the mode of presentational immediacy is like the shadows in the cave. It has no material substance or "thingness." It is an appearance, no more, no less. Yet it is this appearance that is the substance of our everyday life. It is the creation of our Mind, the one actual entity that is the one actual occasion of the classical universe. Many of these ideas are outlined in a forthcoming book (Combs and others, 2003).
The mode of causal efficacy, the unconscious, and the "overall matrix" are one and the same. It is within this reality that all prehensions, internal connections, or non-local interactions occur. The classical universe exists because we see it, and for no other reason. We see it with our eyes, our ears, and our measuring devices. When consciousness, the local mode of presentational immediacy, descends into the unconscious, the non-local mode of causal efficacy, remote sensing becomes possible. Remote sensing is the appearance of an appearance, our seeing of the shadows in the cave.
In experiments on the brain waves or ERPs generated by a random and theoretically uncertain stimulus (Germine, 2002), it was shown that these brain waves differ if someone has previously observed the stimulus. The stimulus, the first observer, and the second observer are all part of a single system or organism, so it is natural to assume that there will be prehensions or non-local interactions between them. The concrescence of the knowledge that the stimulus occurred, however, would involve only the stimulus and the first observer. In the One Mind model, this appearance of a stimulus is embedded in the appearance of a classical universe, and is a function of the “overall matrix” or “universal operator.” Black hole physics has taught us that this appearance is generated beyond the level of our individual minds.
On page 24 of the article, Sidorov considers two possible explanations for the difference between the brain waves elicited by the unobserved and pre-observed stimulus: 1) That the mind/brain of the first observer actualizes (or collapses the wavefunction of) the stimulus, and that this actualization is reflected in the processing of the stimulus in the brain. This was the hypothesis that the experiments were intended to test (Germine, 1998). 2) That the non-local interaction or entanglement of the two observers causes the differences seen between brain processing of the unobserved and pre-observed stimuli.
Sidorov hypothesizes that it is the presence of such a non-local connection between the first and second observer that makes their brain wave patterns different. Sidorov argues that if a number of other observers were to perceive not the "oddball" stimulus but rather the "common" stimulus at the time both the computer and the first observer were processing the "oddball" stimulus, then this non-local effect would be reduced or nullified. Sidorov's alternate explanation is testable, and should be tested.
If Sidorov’s hypothesis is validated by experiment, then we will have discovered the first experimental probe into the non-local interactions of two brains. This would have enormous implications for both science and medicine. If my original hypothesis is validated, we will have nothing short of a revolution in both science and medicine. As outlined above, I believe that both kinds of processes occur, and that it may be one or both that are validated in the experimental results.
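Whichever hypothesis the data favor, the comparison described above ultimately reduces to a two-sample statistical test on some ERP measure across the unobserved and pre-observed conditions. The sketch below applies Welch's t-statistic to hypothetical per-block amplitude values; the variable names, the numbers, and the choice of amplitude as the measure are illustrative assumptions, not data from the experiments cited.

```python
from statistics import mean, variance

def welch_t(a: list, b: list) -> float:
    """Welch's t-statistic for two independent samples with
    possibly unequal variances (positive when mean(a) > mean(b))."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

# Hypothetical mean ERP amplitudes (microvolts) per trial block:
pre_observed = [4.1, 3.8, 4.5, 4.0, 4.3]
unobserved = [3.5, 3.9, 3.4, 3.7, 3.6]
print(welch_t(pre_observed, unobserved))
```

A large |t| (relative to the appropriate t distribution) would indicate that the two conditions elicit reliably different brain responses; which explanation accounts for the difference is then the substantive question.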
Combs, A., Germine, M., & Goertzel, B. (Eds.) (2003). Mind in Time: The Dynamics of Thought, Reality, and Consciousness (Advances in Systems Theory, Complexity, and the Human Sciences). Hampton Press, Mount Waverly, Victoria, Australia.
Germine, M. (2002). Scientific Validation of Planetary Consciousness. JNLRMI (3). URL: www.emergentmind.org/germine3.htm
Germine, M. (1998). Experimental Model for Collapse of the Quantum. URL: www.goertzel.org/dynapsyc/1998/collapse.html