Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect):
- "I find this whole AI bet absolutely insane when you compare it to the developmen…" (ytc_UgyYjaTp5…)
- "@isabellec857 There are a lot of lonely men posting on this site. It's really s…" (ytr_UgxFFc6TW…)
- "The more we use AI the dumber and reliant on AI we humans become. Our demise as …" (ytc_UgxsY9DPd…)
- "Daniel E. Datasets. The point is that people choose what data to feed into an a…" (ytr_Ugxan73Iq…)
- "The danger, as with all things, is that this can be used for food for evil depen…" (ytc_UgxE357iU…)
- "Llm are not ai. The market will implode when the general public do not buy the l…" (ytc_Ugyo5njfP…)
- "At this point, I see AI as just making human creation better than them making hu…" (ytc_UgyZR4rRQ…)
- "You bring up a fascinating point about consciousness! While AI like Sophia can p…" (ytr_Ugz1IJX_U…)
Comment
I will DIE before AI gets to be a possible problem for Humanity, but I do have an idea of where it can be CRUDELY CONTROLLED.
Right NOW...you turn off your COMPUTER (assuming it is THE HOST 100%) and that AI ceases to EXIST.
Now what can an AI do to make sure that it ALWAYS HAS POWER?
One simple answer would be A TINY "NANO-REACTOR"...A NUCLEAR REACTOR it could maintain by building ROBOTS FOR IT.
You can of course DESTROY any ROBOTS but if you cannot access those ROBOTS, it could in THEORY ...STAY ACTIVE UNTIL ALL HUMANS ARE DEAD!
Now there would have to be both KILLER ROBOTS (OFFENSE) and Maintenance ROBOTS (to sustain itself and for Defense).
Now there are over 8 Billion Humans to kill....MY MONEY would still be on the HUMANS to WIN!
The self-maintenance and acquiring Nuclear Fuel and other BOTTLENECKS would be A serious handicap for any AI as "learning on the fly" then CREATING ROBOTS to solve that problem would require an assembly facility which would be hard to BUILD and EASILY DESTROYED.
If it started killing humans at 1 per second...it would actually help us with the population Problem for the first 200 YEARS (to kill 6.3 Billion people it takes that long).
If it started a WW3 it would die on an EMP event along the way, so while there is a slight chance it could locate a copy of itself in a remote location where EMP won't happen, it will DIE THERE as the main part of CIVILIZATION WILL BE GONE!
So forget about Ai being a serious problem as even us Humans would be at full stretch to keep Reactors or Robotics Assembly Plants running 100% of the time, and any WW3 would destroy almost everything...like Einstein said....I don't know what weapons the 3rd World War will be fought with...but I do know what the 4th World War will be fought with...STICKS AND STONES!
youtube · AI Governance · 2025-12-13T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
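For readers reproducing this pipeline, each coded comment can be modeled as a small record holding the four dimensions above. The sketch below is illustrative only: the class and field names are hypothetical, and the label sets in the comments reflect just the values visible on this page, not necessarily the full codebook.

```python
from dataclasses import dataclass

# Illustrative record for one coded comment. Label sets in the comments are
# only the values that appear on this page and may not cover the full codebook.
@dataclass
class CodedComment:
    comment_id: str       # e.g. "ytc_..." for top-level comments, "ytr_..." for replies
    responsibility: str   # company | user | ai_itself | distributed | none | unclear
    reasoning: str        # consequentialist | deontological | virtue | mixed
    policy: str           # regulate | ban | liability | none | unclear
    emotion: str          # fear | outrage | resignation | indifference | approval
```

The Coding Result table above is one such record rendered as dimension/value pairs.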
Raw LLM Response
[{"id":"ytc_UgxqDpMqy3XdbvMNR0l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz9Xl3XjbVYiI-H0294AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwzRhG6GM45eaxmfu54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwzAplp7_XLfoOwjtd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyCB5SWGd2xAPD_OQx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzLg-yThXvGNTaEbpR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy37gD3NTo5kU5fbER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz63w5DFgfpig8Dwoh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyWWbArK49Sqs4TNvR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxItpRzmxdU1DC3C-R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}]