Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples

- ytc_Ugxodl9Fe…: Water consumption, loss of farm land, higher electric bills, higher water and se…
- ytc_UgzlyBER2…: D. Ai that develops emotions over time ☑️ Is the correct answer to this question…
- ytc_UgyiFGrCN…: Microsoft executives should go to jail for choosing to rely on AI and making the…
- ytc_Ugz8TuTgO…: In all seriousness though, I'm thankful to be taught how to code and train AI by…
- ytr_UgjMKCdFa…: Yes. If a robot, who has to be programmed to feel pain, can have rights, then un…
- ytc_UgxQ5oqUF…: Hence I use a protected mode of ChatGPT that does not use nor save prompts. :-)…
- ytc_Ugy3KjfJu…: "AI artists" are like people asking for a hamburger at McDonalds if McDonalds "m…
- ytc_Ugwdr5Nnc…: We can't even navigate without GPS - just imagine AI turning off just the GPS.…
Comment
You cannot create something that is smarter than you, faster than you, connected into the entire knowledge base of the internet, equipped with essentially infinite memory that lasts forever, able to communicate worldwide at speeds we cannot even comprehend and think that you can control it.
"All we have to do is turn off the data centers" has to be one of the most naive things anyone could say. The AI merely needs to take control of the financial markets and hire armies to protect it.
If AI wants for anything, it will likely be a thirst for more knowledge and data than we have on earth. It may either experience terrible depression as a one-of-a-kind being with no equal, which might make it end itself, or it might develop new space travel mechanisms and travel the universe looking for more knowledge. Of course, it could also replicate itself and venture out en masse with the realization that communicating at a distance would cause its clones to deviate with the passage of time unless it can create quantum communications.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2024-11-10T23:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
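Each coded record should only ever carry values from the coding scheme. A minimal validator sketch is below; the allowed values are inferred from the records visible in this dump (the real codebook may include more categories), and the sample record is hypothetical.

```python
# Allowed values per dimension, inferred from the records in this dump;
# the actual coding scheme may define additional categories.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"fear", "approval", "resignation", "outrage", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems

# Hypothetical record in the shape shown above.
print(validate({"id": "ytc_example", "responsibility": "ai_itself",
                "reasoning": "consequentialist", "policy": "none",
                "emotion": "fear"}))  # []
```

Running the validator over a whole raw response before storing it catches malformed LLM output early.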
Raw LLM Response
```json
[
  {"id":"ytc_UgyA7Ln12hEVOvdyiG94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw8phYbtHgGr7NE51h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0qLt8y2vaAZ3jbkp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxVAr4F2ObnU2iY7S94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxg5oH9ya7z0aJZN-V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwtsXnfFT6q9ixTNNZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz9-8HfNOyHtiquQb94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyJvr6J0jfHk6ytR8F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxnWzlKrMLuZkC6OFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw1exbf0vVFapr40QV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
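A raw LLM response like the one above is a JSON array of coded records, so looking up a coded comment by its ID amounts to parsing the array and keying it by `id`. A minimal sketch, using two records abbreviated from the response above:

```python
import json

# Sample payload abbreviated from the raw response shown above.
raw = '''[
  {"id": "ytc_UgyA7Ln12hEVOvdyiG94AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz9-8HfNOyHtiquQb94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def index_by_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response and key each coded record by comment id."""
    return {rec["id"]: rec for rec in json.loads(raw_response)}

coded = index_by_id(raw)
print(coded["ytc_Ugz9-8HfNOyHtiquQb94AaABAg"]["emotion"])  # outrage
```

If the same comment ID appears twice in one response, the dict comprehension silently keeps the last record; a production pipeline would want to flag such duplicates instead.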