Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugwymg3p7…`: "You know this, and every fight to stop AI (or by proxy due to AI) is already ove…"
- `ytc_Ugyv5rEuc…`: "First time I see a Real robot with that perfection other wise there are many but…"
- `ytr_Ugx2CNzu2…`: "@SomeUserNameBlahBlah That’s a fascinating thought! With how fast automation is …"
- `ytc_Ugy2YezSw…`: "It works I guess, but Ryne AI Humanizer is on another level. Give it a try…"
- `ytc_Ugzr9yDdN…`: "Yes I feel that ....1 line q and ai give me ans like he's a friend or teacher…"
- `ytc_UgzGwim3r…`: "This is when everyone who can do something to stop would have the balls to do so…"
- `ytc_Ugxukm7yU…`: "From what I have tried with ChatGPT, here are my thoughts: 1. It can come up wit…"
- `ytc_Ugz5HOQ6N…`: "AI itself is dumb. It is only good if it is used correctly by humans…"
Comment
Incredible, thank you all. Sum, narrow ai (chess) AGI (artificial gen intelligence) super intelligence - smarter than humans in all domains. 2027 - agi. 99% unemployment. 2030 - smart robots. 2045 - singularity, development of new tech in minutes, humans will not understand. 2100 - free of human existence or we will not understand anything. Impact: vast unemployment. No re-training. AI can create wealth however how will humans respond to no work? Maybe crime, pregnancy rates. Governments under-prepared. Companies have no legal / moral obligations save make money for shareholders. Don't know how to make it safe or fix it. They won't figure it out.
AI is a paradigm shift, a meta invention, won't be able to turn it off. Done right ai can solve climate change, wars, done wrong, humans will be gone. Ai creators do not know what's going on with AI. People leaving Openai to start ai safety firms. He advocates focusing on narrow ai uses, not super intelligence. He thinks we are living in a simulation. He's into longevity and bitcoin.
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Published | 2025-09-04T15:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzTFC_CMP3_4hQgRfN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxC45CBJUnsUkmhN_h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnnX_e7hIlgQmnM3J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyKirEQPRHbVdP_Oyh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwq0BtUmYcHmkZWpv54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwC5AdQwvZMlPABPwJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx6yvcWORR6JPGl6b94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyFIIWDe0sMTgUH8a54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwlZ7_GaP5iGqF7gq54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzt3XrwZzxVnQJu1mt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
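Because the raw response is plain JSON with one object per comment, the "look up by comment ID" operation is a straightforward parse-and-index step. A minimal sketch, with variable and function names chosen for illustration and two rows copied from the response above:

```python
import json

# Two rows copied verbatim from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgyFIIWDe0sMTgUH8a54AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwC5AdQwvZMlPABPwJ4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the coded rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if absent."""
    return codes.get(comment_id)

print(lookup("ytc_UgyFIIWDe0sMTgUH8a54AaABAg")["emotion"])  # fear
```

The same dictionary can back both views above: the ID lookup returns one record, and a random-sample view is just `random.sample(list(codes.values()), k)`.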