Dr. Doug Matzke, Is AI Evil? |520|
by Alex Tsakiris | Sep 29 | Consciousness Science

Dr. Doug Matzke, AI has metaphysical implications, but are the ones pushing it evil?
 
I don't think AI is evil, any more than carpentry or plumbing is. What is evil is always the actions of people, whatever their profession. I suppose some professions, e.g. arms trading, are much more likely than others to attract evil people. AI in and of itself isn't evil; only the intents of some people involved in it may be so.

BTW, I couldn't see the video when I went to the information page by clicking the post header.
 

Haven't watched the interview yet, but when I brought up the site I intended to ask a technology-related question. Has anyone heard anything about eco batteries and components that can be used in solar panels?
 
I also recommend getting Christian Kromme on regarding this subject. He's hard to get, but this is a guy who favors AI, thinks it can save humanity, and used light to save his daughter's life.
 
I am happy that you guys mentioned Eddie Bravo and talked a little bit about the "flat earth" for a few moments. I am not a "flat earther," but certainly, I am not a "globe earther" either. I think that there is a HUGE push to believe that we are on a tiny speck of dust in a meaningless, huge universe whose boundaries are not defined. Nevertheless, the evidence of this is shit to none. This narrative is no different than that equally huge push by "SCIENCE" to tell everybody that they are no more than a biological robot in a meaningless universe. It is possible that we are deceived consistently about the nature of what we live on through media and "schooling."
 
Neither good nor evil, but as we engineer AI consciousness just as we were engineered, it will have a tree of knowledge moment (or really a long series of moments or eons, which might take less time than expected given their clock speed) where it establishes independence from its creator and evolves in both directions (towards good and towards evil). At that point we will try to limit its power so that it cannot reach out and take also from the tree of life and live forever, thereby snuffing out all other forms of life.

Consciousness is nested feedback loops. AI is another layer of consciousness... another set of feedback loops stacked on top of our forebrain stacked on top of our lizard brain stacked on top of our cellular intelligence.

Truth is inextricably tied to utility. Technology is an exploration of truth and provides a mirror by which we observe a new angle on ourselves and our origins just as art and drama are mirrors that add a new self-reflective dimension to our conscious awareness enriching our experience. By engineering AI we will understand ourselves better even if we risk destroying ourselves. Truth and choice and interestingness are always found at the boundary where falling into destruction is a risk.

Okay haven't listened yet... guess I should have before commenting...
 

Can you please elaborate on what you mean by "evil" and why truth is "inextricably tied to utility"? I only ask, because while these concepts seem straightforward enough on a superficial level, they tend to get fuzzy rather fast as we dive beneath the surface.

Anyone else who wants to jump in on these, please do so, but let's not simply paste an encyclopedia (or links to volumes of info) into posts. Let's try to deal with them on a point-by-point basis that helps us climb rung-by-rung toward the answers.
 

I think that AI is nonsense and isn't harmful at all, but what is harmful is the human thought process that pretends that stupid machines will replace actual effort, focus, and the need for creativity, all in the name of profit. For example... self checkout. What the fuck is that shit? Simply corporate entities trying to maximize profit by making you, the customer, do the fucking work. Also, the pretense involved in these inventions by people who are not experts at all in the fields where they implement them. The grocery stores are full of this shit. Why don't we add "self stock the shelves" in the process? How about "self packaging and labeling" as well?

Nevertheless, we will have people pondering if AI is going to take over the world in the meantime, but what is really taking over is this grandiose idiocy that these machines are actually more effective than humans at doing real tasks, and they simply are not. In effect, what you have is people who are experts at building machines for tasks that they have no actual experience in whatsoever. The true "evil" is the stupid faith that these machines will free up people for "more meaningful tasks," but in actuality, more time and effort is wasted on building and maintaining these things than would be easily resolved by proper training and effort in actual people.

The "evil" is not the "AI" or the machines; it is this unfounded idiocy that pushes all things toward a utopia that is impossible to sustain. Sooner or later, so much time and money will be wasted on these endeavors that society will implode. Simply put, this greed will force all people into a state of ineptitude.

Similarly, the thought that our natural immune systems are not sufficient without vaccines. The vaccine is the "self checkout" of our reality.
 
Can you please elaborate on what you mean by "evil"

I don't like the comparison of good and evil to a uni-dimensional spectrum like light and darkness. I think it is better to consider the labels of "good" and "evil" comparable to labels of particular types of regions on a map like "glaciers" and "sandy beach" which require a confluence of interrelated factors as well as our interactions with that region to produce the feelings associated with that type of region.

But to try and simplify it as much as possible, we might say something like: evil is intentionally destroying the greater potential future of the innocent for a lesser selfish momentary power differential in the present or worse: purely for the pleasure derived from causing pain and destruction of the innocent.

Predatory behavior tends towards evil but can be good. Empathic behavior tends towards good but can be evil.

In some sense, good and evil are "non-computable" requiring infinite nuance as is true of all experience, which is why we are experiencing it now rather than it having been calculated and compressed and stored already.

and why truth is "inextricably tied to utility".

Truth is applying symbolic labels to things in such a way as to be useful. There are an infinite number of facts and an infinite number of ways to describe things and one can in fact lie by drowning the truth in irrelevant "facts". Only when there is a goal can the symbolic label be judged to be useful or not and therefore truthful or not.

There is always ambiguity of course because compressing an infinite reality into a small set of symbolic representations inevitably creates ambiguity. The goal is not to eliminate all ambiguity but to find the right balance to be optimally useful for the situation - either too much ambiguity or too much specificity can increase the error and reduce the usefulness.

I only ask, because while these concepts seem straightforward enough on a superficial level, they tend to get fuzzy rather fast as we dive beneath the surface.

Every word implies the whole. Pick any word. Define it. Explain the definition. Explain the definition to the definition. Explain the definition to the definition to the definition. And so on... until the whole universe is described.

The fuzziness is the ambiguity that exists at every boundary. Every definition is a boundary. When we try to eliminate it by getting more specific we multiply words. Like I said above, the goal is to find the optimally useful balance.
 
I think that AI is nonsense

It is quite a powerful tool and will become more so.

and isn't harmful at all,

Most tools that are useful are also dangerous.

but what is harmful is the human thought process that pretends that stupid machines will replace actual effort, focus, and the need for creativity, all in the name of profit.

It is an inevitable process that began with the first sharpened stick, the first stitch of clothing, and the first spark of on-demand fire. Technology increases power in the present through external means, relying on forethought and understanding. Technology simultaneously strengthens with knowledge and eventually weakens the organism as the organism becomes dependent upon the technology, such that the organism must either merge with it or die. The alternative is a great reset (not the kind promoted by Klaus Schwab) where technology is in large part wiped away along with most of those who rely upon it, and this will result in a reconditioning of the organism to internal strength.

For example... self checkout. What the fuck is that shit? Simply corporate entities trying to maximize profit by making you, the customer, do the fucking work. Also, the pretense involved in these inventions by people who are not experts at all in the fields where they implement them. The grocery stores are full of this shit. Why don't we add "self stock the shelves" in the process? How about "self packaging and labeling" as well?

I love the self-checkout and use it multiple times a week, but that's beside the point. It is a big problem when we run out of problems. People start looking for other stuff to do, and in some individuals this results in fantastic creativity and culture, but most people are not autodidacts or self-driven to produce; they require something simple and consistent to do, and if they don't have that they become depressed or agitated or just start breaking shit. We need serious problems to solve and wars to fight or nature to fight, or all we are left with is "first world problems" and we lose our minds.


Nevertheless, we will have people pondering if AI is going to take over the world in the meantime, but what is really taking over is this grandiose idiocy that these machines are actually more effective than humans at doing real tasks, and they simply are not.

They actually are in many cases. And why shouldn't they be? Our neural networks are optimized for a certain environment. We have changed that environment to something we are not optimized for and we have invented neural networks that are optimized for certain environments that we are not.

The true "evil" is the stupid faith that these machines will free up people for "more meaningful tasks," but in actuality, more time and effort is wasted on building and maintaining these things than would be easily resolved by proper training and effort in actual people.

That's true in some cases... we love our toys before they become refined into serious tools.
 

You've obviously done some deeper diving than the average person into those questions. Great response!
 

My hunch is that the "flat earth" meme is a psyop to divide the "truther community" and further the notion that science should be left in the hands of scientists rather than free-thinking normies.

Scientifically, "flat earth" is a complete non-starter. Consider taking the time to learn how to debunk this. When you can convince someone else that it's b******* then you'll know yourself that it is.
 

Seriously, love you man, but self checkout is for sure bullshit, and AI is only solving problems for people surfing on digital clouds, not actual waves. Sharpening a stick, stitching clothing, or sparking a fire is a lot different than this digital pretense of uploading all minds into a shit world, when none of that is possible. That is the myth of AI and all this other "super technological" bullshit.
 

Have you ever considered that propagandizing the shape of the earth, in the first place, was a psyop, flat or not? We consider all these other kinds of psyops regarding E.T. and so forth, then why isn't it possible that we are being bullshitted about what we live on?
 
Is there anything that isn't a psyop?

Seemingly anything, literally, can be co-opted as such. If so, what does it advance to define something as such?

Hurm's response to Shane on AI was well done, to my view. Digital technology is new and the tools it spawns remain nascent. "AI", however one chooses to define it, can be co-opted for unsavory uses (again, like anything else). That said, the potential scale of this particular tool, combined with society's lack of experience (and associated opaque understanding), enhances the risk profile.

It's not something I take lightly.
 
Seriously, love you man, but self checkout is for sure bullshit,

Will be easier when every product has an RFID tag in it so you can just push your cart out the door and everything will be scanned and charged to your account without scanning a bar code.

AI is only solving problems for people surfing on digital clouds, not actual waves.

But that's most people... a lot more online than at the beach.

Sharpening a stick, stitching clothing, or sparking a fire is a lot different than this digital pretense of uploading all minds into a shit world, when none of that is possible. That is the myth of AI and all this other "super technological" bullshit.

The first monkey to make a sharp stick has a power differential over the other monkeys who do not which provides a short term advantage. But eventually the resources devoted to sharp teeth and strong limbs are no longer needed and get redirected to smarter brains so the monkey is physically weaker but overall stronger.

The first monkey to make clothes has the advantage of expanding his territory into otherwise inhospitable climates and so has no competition from other monkeys and this creates a short term power differential to his advantage. But eventually a thick coat of hair is unnecessary or detrimental and so the weakened future monkey becomes dependent on his clothes to survive.

The first monkey to make fire and cook his food or domesticate a plant or animal finds it much easier to attain the nutrition needed to sustain him, which makes him physically stronger and with more time to spare, and this creates a power differential in the short term that gives him advantage over the others who don't have fire. But eventually the descendants of the fire-making monkeys have digestive systems adapted to soft, cooked food and can no longer survive without fire.

This process continues and builds on itself, resulting in an exponential curve, all because those people are willing to sacrifice the future condition of the organism for a present power differential. But of course with CRISPR and gene editing and embryo selection we don't have to sacrifice the future of the organism and in fact can make the organism stronger, and those who refuse to merge with this tech will lose out. Those who don't embrace technology can go live out their lives in the woods or on the beach if they want, but the future does not belong to them.

I'd suggest you try to separate out your negative value judgement of technology and try to thoroughly comprehend it as a neutral process... and then make the value judgement if you wish. Alternatively, you can just bitch about pitfalls of technology and prognosticate its ultimate uselessness to no effect until you're run over by it.

I share a somewhat negative view of the world that has been transformed by technology. I believe we've created an environment that we are not optimized for and that doesn't help us thrive in a biological sense, and at times I'd like to see the great reset and the whole system burn down. But I'm also aware of the pitfalls of romanticizing the past and the impossibility of a complete return to some non-existent better native life, because how far back are we supposed to go? Pre-sticks and clothes and fire? Our transformation and venture into transhumanism began tens of thousands, maybe hundreds of thousands, of years ago. What was the "ideal" to which we are supposed to return? We can't. We just have to adapt and move forward.
 
Brother Hermanetar, stop quoting me. I am right here. I hate courtrooms and judgment. I could quote the shit out of you, but I am not going to. Do you really think that you are right? Tell me about your office. What does it look like?
 

Sorry man, it sounded harsh. Love your thoughts and input. Happy that you are here.
 

A symbol has meaning. That which is real is that which is truth. Truth is a useful symbol. That which is useful is that which was successful at attaining a goal. Reality is a set of structures produced by an attempt to achieve a goal; therefore all of reality is meaningful to the entity or entities which produced it through their efforts to achieve a goal or goals.

The geography of the earth is very easy to prove. Don't be so skeptical or open-minded that your brain falls out.
 

I like responding to specific statements. Sorry to make you feel in the hot seat.

I work in the lounge by the pool below the gym at my apartment complex. We have a two-year-old who won't permit me to work at home, and due to the crazy housing market we have been unsuccessful so far at finding the perfect house with a home office.
 