Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgwatYYTP…`: "A human learns all things, not written in the DNA while living. So, a Robot that…"
- `ytc_UgwBOSpRl…`: "in reality yes 90 % Robot can not trust it's dangerous in a future .…"
- `rdc_erb2ycj`: "That was the policy when I was in Kenya 15+ years ago. Not so much as a radical …"
- `ytc_UgxeDyN3i…`: "I worked for a big tech company that has developed Gen AI to yes “replace” actua…"
- `ytc_UgxMAzXW1…`: "I find it hard to trust anyone with vocal fry levels as high as the intelligence…"
- `ytc_UgzMDGLek…`: "Did the researchers predict that they will be replaced by AI in the next 2 years…"
- `ytc_Ugy21nFcj…`: "I am a software engineer. I use AI all the time to basically speed up my googlin…"
- `ytc_UgyEmrnZI…`: "Robo taxis, taking tech jobs, creating movie scenes backgrounds for Tyler Perry,…"
Comment (youtube · AI Governance · 2023-04-18T23:4…)

> Its obvious that the major problem with AI is who will control it. The super elite will control it and will be able to take propaganda to a level never seen before. They will be able to illustrate whatever narrative they want with AI videos that will 100% look real. They will be able to blackmail and incriminate people with AI voice recordings and videos and no one will be able to tell the difference.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxJapG0m3i_j-14D-Z4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugw0MDIrdu13LWL9moB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwZsBPDbHjAkHPykJN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzJFqQWvY_KPnPvECJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw_xETqovaLrlmUy9p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyVDS6rvCpk76wCl594AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzrPSsvP2kwG56LfA54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzkzwjlbDhIKbipbY94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzI9r_Pu7xKVCOsLah4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzSkL_I2x6atir6ZSh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
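The raw response above is a JSON array of per-comment codings, one object per comment ID, which is what makes the lookup-by-comment-ID view possible. A minimal parsing-and-validation sketch, assuming the dimension vocabularies inferred only from the values visible in this sample (the real coding scheme may define categories not shown here); the `raw` string is truncated to two records for brevity:

```python
import json

# Raw LLM response, truncated to two of the records shown above.
raw = '''[
  {"id": "ytc_UgxJapG0m3i_j-14D-Z4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugw0MDIrdu13LWL9moB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "ban", "emotion": "fear"}
]'''

# Assumed vocabularies, inferred from this sample alone; the actual
# codebook may be larger.
VOCAB = {
    "responsibility": {"government", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_codings(text: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping rows
    whose values fall outside the known vocabularies."""
    out = {}
    for row in json.loads(text):
        if all(row.get(dim) in allowed for dim, allowed in VOCAB.items()):
            out[row["id"]] = row
    return out

codings = parse_codings(raw)
print(codings["ytc_Ugw0MDIrdu13LWL9moB4AaABAg"]["emotion"])  # fear
```

Validating against a closed vocabulary before storing is worthwhile here because the LLM output is free-form text: a single malformed or off-schema row should be rejected rather than silently coded.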