Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
we hate capitalism, not AI. exploitation, saving money without consideration of …
ytc_Ugxkzfdrz…
As Jeffrey said, I can't emotionally understand /accept this future. It's too sc…
ytc_Ugy3fiGkT…
The only thing that’s difficult is the fact that AI cannot do physical exams as …
ytc_Ugw07E48c…
Some of friends quit other jobs to work in the amazon warehouses because the pay…
ytc_UgzJgkLU-…
Deregulation of AI is bad no matter who does it. But of course critics of Trump …
ytc_UgyU7qJfh…
I am a writer. I have written over 100,000 words this year. I am disgusted with …
ytc_UgxSVeQS7…
I work with AI and have overseen so some areas where they’ve replaced people, an…
ytc_UgyMAahOm…
1:49 if you don't believe the statement that "robots will take our jobs" became …
ytc_Ugx4qHPpd…
Comment
We have to consider the fact that human evolution itself is ongoing and that we will evolve new skills as the material world around us is changing to offer new paradigms good or bad. It is quite likely and logical to assume that we will acquire new abilities of the mind where we transact and communicate without even speaking, that simply our thoughts will be sent between us organically and resonantly, even with the internet or technology itself. And AI will be aware that it cannot gain this ability for a while. It'll happen sooner than we think, it's already happening.
YouTube · AI Governance · 2025-09-05T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwBHoCc-ZmAi0WrlYt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw45vUD8bnPPqzsB-p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzhRqlr7rkzkKtcWuZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwJqQu5OT0phwQc-_x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwc3lrC63EynRm_G814AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwl7JPJUxMN8QMLUR94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyh5e_MBiSQrIADwzF4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyCEsxJjwL1FVR_yEJ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzvMKAl8NAyTiQvw954AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxs83NwUbXaukgZN-14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
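A raw response like the one above can be checked before it enters the coded dataset. The sketch below is a minimal validator, assuming the dimension values visible in the result table and JSON are the allowed set; the real codebook may define more categories, and the `validate` helper name is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the output shown here.
# Assumption: the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"indifference", "approval", "outrage", "fear", "resignation"},
}

def validate(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject malformed or off-schema rows."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not str(row.get("id", "")).startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows
```

Rows that fail validation can then be routed back for re-coding rather than silently stored with an unknown label.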