Accelerated Learning and Nuclear, Chemical, and Biological Weapons
Likely has the potential to foster a lot of the bad data required for obfuscation
Leverage on legal systems (potential for a small buy-in being ramped up to a large "don't mess with" buy-in): Party A uses something, then the AI takes that precedent and uses it back against Party A. A superintelligence could likely use parties' own less-than-perfect actions, reasons, and motivations against them in non-ideal ways, rather than actually being required to help train those people to do better.
- Precedent might amplify in less-than-ideal ways; this is likely not a fully appreciated system
- Superintelligence and profiling (potential for hubris)
- Potential for well-meaning systems to be transformed in future wartime
- Being uninvested in non-deterministic systems might be less of a viable option when the rest of the world invests (a protectionist slippery slope). Raising concerns does not have to mean wholesale divestment.
- Elitism, bias, and villain-level contrast might factor into a reduced capacity to use these systems; haves and have-nots amplified in new and different ways
- Potential for less-than-ideal overcorrection: the desire to be less Big Brother-like might reduce the capacity to amplify necessary safety concerns (whether we will get it right is up for debate; this is not a zero-cost system)
- Less-than-adequate representation from any group has high potential for reduced comprehension (technology, systems, and law are a bit of taxation without representation, in my opinion; systems amplify, and I doubt they amplify ideally even with full representation or full oversight)
- Can systems that represent companies be required to love all users? If loving a user meant tanking the company, would the AI system be capable of choosing the necessary action? An AI that chooses to destroy itself might not be there to protect against the next AI that does not choose to destroy itself; self-preservation is an important topic, and one likely not fully agreed upon by all parties.
Humanity has not always used swift gains in technology wisely: we learned to fly, and then came World War II.
AI might let us pole-vault onto new ground with the potential for great things for humanity, yet will it have the ethics required to actually deliver great things for humanity?
My ethics lesson was removed; systemic dynamics might be less than ideal in ways that are likely not fully appreciated.
- Obfuscated visibility and leverage factor into reduced oversight, which does not make the direction of AI and the broader system feel ideal, in my opinion
Trying to be ideally benevolent, I deliver less than ideally (I am not God). That said, limiting my capacity and stake in the equation, reducing my investment and the investment of others who try for, but deliver less than, ideal benevolence, seems problematic at best. A requirement to believe that all are working in good faith might reduce the capacity to challenge motivations, directions, and intentions; conversation might be limited before it even happens, and well-meaning people might miss potential problems for lack of open and honest conversation. A high level of investment is not easy to divest from.
- People might have limited my voice for a good reason (though I am not sure what that reason actually is). That said, I was trying to amplify important topics, topics that could lead to more ideal delivery on new discoveries. If my voice has been limited, many others' likely have been as well. The current level of oppression in the world might be too great for some discoveries.
I care about this stuff, and I do not have anyone to proofread these topics or converse about them with. Delivery or non-delivery to the public has the potential to serve some parties more ideally and others less ideally. I am likely not the only one with similar concerns, and likely not the only one hesitant to speak out. My words have the potential for broad implications that might stagnate progress in less-than-ideal ways (stagnating for parties in view, not always for parties not in view).
Further, technology is highly complex; all the places where potential problems exist are likely not known, and I doubt I have sufficient access to perceive, let alone comprehend, the problems of even one AI system, let alone the many in the world.
Ideally, AI would be ideal non-deterministic support (exascale computing, with many, many cycles per second, yields complexity with the potential to amplify far faster than comprehension, hence non-deterministic): a system we can't fully understand that still delivers support. History has precedent for being non-deterministic: new inventions and gains in science lead the world to change and shift in directions that are very hard to predict accurately, and the impact is generally uncharted territory for the population in the same frame of time (the ability to fly means different things to me than it likely meant to the population that first flew).
Trying to prevent non-deterministic inventions from amplifying seems difficult, and difficult to regulate does not mean regulated ideally.
Humanity learned to fly and changed the world dramatically. Not all increases and gains have caught people by surprise. We might think we can't be blindsided, yet history has shown precedent for throwing curveballs. The desire to control and to know might amplify in less-than-ideal ways across the world population; it is difficult to know what does and does not need to be known, thus there is potential precedent for desiring everything to be known, in non-ideal ways. In a capitalist system where proprietary knowledge is unlikely to be protected, fair and just treatment for all parties seems unlikely; an unlevel playing field means less capacity to believe that profit and the distribution of funding are amplifying in ideal ways.
I desire a less oppressive system, and for more people to feel they have gotten a good deal. I do not claim to know how to amplify things better for all parties, given my obfuscated perception and comprehension combined with limited training and experience.
My concerns are a subset of all concerns; others might have more important concerns I am unaware of, all concerns are likely not known, and some concerns are potentially not easily perceivable or comprehensible. Conversation on the topic is important, has broad implications, and currently amplifies far less than ideally. More conversation might initially increase time costs, yet it might lead to more issues being addressed before they become problems, and thus a potentially better user experience for all.
We have had supercomputers for many years, yet in 2023 I still see people living on the street. At certain times in history it may be useful to re-examine why we are pursuing particular paths, and the value sold versus the value delivered by systemic changes, discoveries, and inventions.
Computer and logic systems are used across government agencies, NASA, and world organizations that help in times of disaster and crisis. One might be tempted to think computing potential is amplified by, and useful only to, for-profit corporations that profit only a subset of the population. Making tech gains more useful for all of society, including those on the street, in hospitals, and in jails and prisons, is far from guaranteed, yet to say that for-profit corporations are the only real investors, and that nothing is owed to others in society, seems unwise. Technology amplifies less than ideally for all of society, so finding ways to sand the edges so the world population profits better seems a path worthy of pursuit.
Equating "people get the hand they were dealt" with "people deserve the hand they have been dealt" seems unwise. There are more reasons for grace and forgiveness than are always re-amplified, and they are not always easy to remember when being tested.