Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "star trek .... you do things because you want too.... Run a vineyard for free bu…" (ytc_UgzRbVrFC…)
- "To be honest that reassuring but whatever happen today can't prove that is won't…" (rdc_n80seq3)
- "5:18 \"On the other hand, they are doing just what I was hoping they would do: he…" (ytc_UgxuCRzFB…)
- "That's an interesting perspective! Sophia emphasizes the importance of continuou…" (ytr_Ugw0KHWRP…)
- "If a hacker finds a single flaw in china's this massive AI technology that hacke…" (ytc_UgzQrgGO-…)
- "I agree that AI might not kill programmers yet, but what is killing us is the ov…" (ytc_Ugy5QtoZP…)
- "People who fear that AI will take their jobs often resist change and prefer to r…" (ytc_Ugz5AMWKV…)
- "he right because i was 1 of them i got replaced by a robot i got fired because m…" (ytc_UgyqI6B_u…)
Comment
Imagine someone wanting you out of the way. They use AI to construct “video evidence” of you committing a heinous crime. If the video is so convincing, how do you dispute it?
Or maybe your spouse wants to accuse you of cheating in order to get a divorce with the judge on their side. So evidence is created that’s fictional, but so convincing you can’t dispute it.
What about ruining the reputation of your business competitors?
The list of possible evil ways to exploit this level of technology is endless. And all that is not even considering how AI could become uncontrollable.
youtube · AI Governance · 2023-04-20T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxDs_G6yMuMR3rbUBp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzSJE0OobKT6yTGKcB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwfiqdkDG_KRiluIth4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8cUoXObSig4XdPu14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyV9t7DmDGWWEHnhAl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgykD5GjhgaE6QT38i14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_Ugwb5caVnaJyQP5nCj14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzHxEgJ913rqAYzpLh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxUrs36AIgKToIgcZZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxFtKeubJNiA859-Bl4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
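The raw response is a JSON array with one record per coded comment, so looking up a result by comment ID reduces to parsing the array and keying it on the `id` field. A minimal sketch of that lookup, assuming only the JSON shape shown above (the helper name and the shortened sample IDs here are hypothetical, not real comment IDs):

```python
import json

# Sample raw response in the same shape as the array above,
# with placeholder IDs for illustration.
raw_response = '''[
  {"id": "ytc_example1", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "developer",
   "reasoning": "unclear", "policy": "ban", "emotion": "outrage"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_example2"]["policy"])  # -> ban
```

Keying on `id` also makes it easy to spot duplicate or missing IDs when the model's output is joined back against the original comment set.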