Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_d7kto5d`: "> And why the fuck can't I find the actual study in question anywhere? Given…"
- `ytc_Ugx2WTvgm…`: "the labour market thing won't be the problem. remember we live in a democracy, o…"
- `ytc_Ugwu8oYV3…`: "If ones were actually 'intelligent' in this world, this should've been something…"
- `ytc_UgzNFqTAp…`: "Basically, there is no point in looking at AI art because no thought process or …"
- `ytr_Ugx3TV1Ip…`: "God shows us that art has everything put into it / Blood, sweat, time, tears, emot…"
- `ytc_Ugw5b0zWU…`: "I'm for it if it's gonna benefit the student. I noticed that many people comment…"
- `ytc_UgxXbVNst…`: "I'm a techno geek, I graduated with a b.s. in tech, so I love the techno things.…"
- `ytc_UgyVvaVKH…`: "Why people hate on other people who take advantage of AI to make money intead of…"
Comment
Humanity's fall and weakness is a result of moral questions (fruit of the Tree Of Knowledge Of Good & Evil).
Since AI is a product of its fallen creator would it be a surprise for AI to have the same issues with moral questions.
Or, is it better to possess no moral compass at all?
Examples:
1 Nuclear technology can be harnessed for power or bombs. How will AI decide it is appropriate to use nuclear technology.
2 Acquisition of political power. Does AI know best? Can it be manipulated to manipulate other humans?
youtube · AI Governance · 2023-04-18T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxTHWOFJZpBuA8Ii054AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzDtwsqHm6Y-CYeZOt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyGSDyxom5_HHboM2V4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwRDt3oI5K7s1kAsxZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgywwFhdPQh4lhhmVgh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzeOC_hNpsyE4UmY4l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwdrwT0frREfzAqIip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz08rB3NMf6RMDfgt94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyneZ-dorCTnhECkIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyYv7Avee4l_dHMS1h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
```
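The "look up by comment ID" step above can be sketched as a small parser over a raw batch response like the one shown. The four coding dimensions and the `id` field come from the JSON itself; the function and variable names (`index_by_id`, `RAW_RESPONSE`) are illustrative, not the tool's actual code.

```python
import json

# Two entries excerpted from the raw response above, for brevity.
RAW_RESPONSE = """[
  {"id": "ytc_UgyGSDyxom5_HHboM2V4AaABAg", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwRDt3oI5K7s1kAsxZ4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]"""

# Fields every coded record is expected to carry, per the response format shown.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        missing = EXPECTED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing: {missing}")
        index[rec["id"]] = rec
    return index

codings = index_by_id(RAW_RESPONSE)
print(codings["ytc_UgyGSDyxom5_HHboM2V4AaABAg"]["emotion"])  # resignation
```

Validating the field set before indexing catches truncated or malformed model output early, instead of surfacing a `KeyError` later in the inspection UI.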