Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- ytc_UgxnxFg4P…: "There are going to be accidents no matter what's driving because people are frig…"
- ytc_UgxnWBYq1…: "All I know is that as soon as the big companies get their AI media. Then the law…"
- ytr_UgzBXlOh9…: "@mollymaguire1391 I don't know if I could explain it all in a comment. Quality …"
- ytr_UgzdCdG6Q…: "Wait so you think the dude might be a robot too?!! This is scarier than we even …"
- ytc_UgxJ0tWzX…: "AI does not pay taxes, does not pay into Social Security or Medicare or support …"
- ytc_UgxIRSjst…: "Ai isnt taking jobs as we know tgey are being cut because tgey are cutting cost…"
- ytc_UgxviHNaZ…: "It is easy. To prevent bad people to make a bad AI all good people should join t…"
- ytc_UgwKAmbgX…: "AI is the biggest dangerous for our life plzz don't share photos and anather vid…"
Comment
I like the argument of large misaligned social structures in the debate of AI safety: humanity created governments, corporate entities and other structures that are not really aligned with human values and they are very difficult to control. Growing food and drug industries resulted in an epidemic of obesity and the diseases it causes. Governments and financial systems resulted in huge social inequalities. These structures are somewhat similar to AI in the sense that they are larger and smarter than every individual human and at the same time they are "alien" to us as they don't have emotions and think differently. These structures bring us a lot of good but also a lot of suffering. AI will likely be yet another entity of this kind.
youtube · AI Governance · 2023-06-26T16:4… · ♥ 99
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz-xaGPm3D8c0ixwBJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwpCXSpz_jjcNwXPVZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyWT2UpaskQUMAayqZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJMOcfjVCpVbEKq7p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw038Sm5-hO9QbDRQt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-SmocC08gAzk5kgp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzA8QT364rRklCbe8h4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx9jAOzKSBkQ2GH8K54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwt7LuF1KC8pyqZBbN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxF1j0N3Xrp1OOO34N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
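The raw response is a JSON array with one object per coded comment, keyed by `id`. Looking up the coding for a specific comment ID can be sketched as follows; this is a minimal illustration, with the `raw` string abridged to two entries from the response above (the real pipeline would read the full response):

```python
import json

# Raw LLM response: a JSON array of coded comments
# (abridged to two entries from the response above).
raw = """
[
 {"id": "ytc_UgzA8QT364rRklCbe8h4AaABAg", "responsibility": "distributed",
  "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
 {"id": "ytc_Ugw038Sm5-hO9QbDRQt4AaABAg", "responsibility": "government",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the array by comment id so any coded comment can be
# looked up in O(1), as in the "Look up by comment ID" box.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

result = codes_by_id["ytc_UgzA8QT364rRklCbe8h4AaABAg"]
print(result["policy"])   # regulate
print(result["emotion"])  # mixed
```

The dict-by-id index mirrors the dashboard's lookup: the comment ID is the join key between the raw model output and the per-comment coding result table.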