Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Roman is predicting the outcomes from only one line of thinking - assuming that every other part of society and economy will remain the same as AI development is progressing. If anything it has potential to free humanity from paid work but it also has a potential to enslave everyone on the planet. Governments will be forced to look into universal basic income in the transition period until the economy and definition of value completely change. Ultimately any AI has only one problem to solve - a reliable source of energy, simply because this is a matter of its survival. Humans might or might not benefit from the optimal solution in the environment / boundary conditions that we created before the AI started looking for solutions.
youtube
AI Governance
2025-09-06T07:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbVJN7c1tFjbVQbuN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzsokEwe48bMC6MlE54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxlqkwy7dgtTqboIIh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzW_rJ5FZZFA5q8U3d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQI1b5_DrMFLyGQpF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugypddgr3BKW6CD586d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyS8Er50IqQ1SAl1ip4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw5ieEMwIv9i8MqvGp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxfAfTk97dP1YPlT054AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyOpkzHHGE_6X1Osrp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
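The lookup-by-ID step this page offers can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a JSON array of per-comment records like the one above, and `lookup_coding` is a hypothetical helper name. The two records shown are copied from the response above.

```python
import json

# Raw LLM response: a JSON array of per-comment coding records.
# These two entries are taken verbatim from the response shown above.
raw_response = '''
[
  {"id": "ytc_UgzW_rJ5FZZFA5q8U3d4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwbVJN7c1tFjbVQbuN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

def lookup_coding(records, comment_id):
    """Return the coding record for a given comment ID, or None if absent.

    (Hypothetical helper for illustration; not part of the tool itself.)
    """
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw_response)
coding = lookup_coding(records, "ytc_UgzW_rJ5FZZFA5q8U3d4AaABAg")
print(coding["responsibility"], coding["policy"])  # government liability
```

A lookup like this is what lets the page pair each coded comment with the exact model output it came from, so any value in the coding table can be traced back to its raw record.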