Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgyzaWKX6…: Karen Hao speaks of "belief" / A "god" is being created / Starvation forcing the …
- ytc_UgxO7OKKt…: What you have to be careful of is whenever you turn the community against you co…
- ytc_Ugze6sOQ3…: Is that a bad thing? / I'd answer that with: Depends. / While AI might improve lots …
- ytc_Ugzj84swm…: The right to exist and function without being subject to unnecessary harm or des…
- ytc_Ugxwu9IRO…: Labour jobs will be difficult to replace with robots 🤖. For example paving a new…
- ytr_UgyqOx2vP…: we also have the choice to just not listen to this AI stuff. At the end of the d…
- ytc_UgyPwsdWV…: What I find most impressive in all this, is how ChatGPT is interpreting and answ…
- rdc_mcrzc9i: the /extensions link doesnt work and if you type extentions you get a 404. / i ca…
Comment
I think we’ll be sitting here in 2027 and we will be just fine. I don’t doubt there are dangers of super intelligence but I disagree on how realistic the time line is, given the time it will take to adopt AI meaningfully into the global economy. We assume, super intelligence will act a certain way toward us, but given the guard rails we set today, or lack thereof, it could very well behave in a very different way from what we expect, and there’s no guarantee it will be able to solve all of our problems, like many are speculating.
Will it have emotion intelligence as well? Will it think like us? Will it be able to perceive time and communicate context in the same way as us? Can it match human intuition and the ability to make complex decisions intuitively? I’m not so sure.
youtube · AI Governance · 2025-10-25T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz0grQ5upHaZFDapD54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_snWFzPmg0TVW0OV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwX-okwr4dXFqcWZE54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwEZJSl3vbZ-Yvpc8B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx8eljpjyhCuMyqdJR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZtIBEX9R1qSNC3qx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw1lCFxrppq4zPlKaB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxA9rdoSNQU8mukh5t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwBpoNMwDKgGr5aNnx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugy7KsaGGua8Sn_TOYx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
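A response like the one above can be turned into coded records by parsing the JSON and checking each field against the coding scheme. The sketch below shows one way to do that; the allowed values are inferred from the sample output on this page, and the actual codebook for this project may define additional categories.

```python
import json

# Per-dimension vocabularies inferred from the sample response above.
# Assumption: the real codebook may contain values not seen here.
VOCAB = {
    "responsibility": {"none", "user", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"indifference", "resignation", "outrage", "fear",
                "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record.

    Raises ValueError if a record lacks an "id", is missing a dimension,
    or uses a value outside the inferred vocabulary.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in VOCAB.items():
            value = rec.get(dim)  # None if the dimension is absent
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Hypothetical one-record response for illustration:
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"resignation"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # resignation
```

Validating before storing means a hallucinated category fails loudly at ingest time rather than silently skewing downstream counts.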