Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- The only thing I love about this is that AI is teaching from public data. What i… (ytc_UgxL77ute…)
- I use ai art pretty much just for my DND campaign. I don't claim it's mine, I do… (ytc_UgxgWXcUP…)
- Current state AI/LLMs do absolutely nothing until prompted..in order for it to b… (ytc_UgxV5BH5X…)
- Listen, i love niramis personality and her work but CHARCTER A.I.???!??! "EEWWWW… (ytc_Ugz5UTLtp…)
- Whew! I was thinking that if Werner Herzog dies we won't have a voiceover talent… (ytc_UgyeT2j2I…)
- Google has introduced a new AI model designed to help robots better understand a… (rdc_ogwlpq1)
- Sincerely, this is overblown. Base models are chaotic mostly because they just t… (ytc_UgzvDis2W…)
- MIND-READING, an AI-manipulated crime has evolved internationally the past years… (ytr_UgwUwAKsk…)
Comment
Well done scientists. How about an enquiry into the entities approving such research from the get go? This is precisely like the problem atomic energy created. Unlike his flippant optimism expressed, we will not and do not collaborate internationally to minimise the risk of nuclear destruction. There is no evidence humans/nations will treat this technology any better. He did say the military are most interested! Also, it seems rather late in the day to be voicing concerns that AI poses an existential threat. I believe these scientists should be accountable. Knowing the risks they have released AI already into the public domain and now admit they cannot control it. Well done for introducing another existential threat to humanity. Very foolish, now we must contend with nuclear destruction, life ending climate change and uncontrollable AI.
youtube · AI Governance · 2023-05-14T01:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyTwIYUBDb5I_rtHjR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwk0uuwjOC7lf5ke6B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw_vHn-k_pb5_0y7px4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz6fCk7C4QVwJa1ja94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9P_ZlLPQqA7mmbyd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzwGJsCsgJ9zV46N8Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyoHqiStiywSpFf4aN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzaVorEQs87Hei4Cbh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxsJ6-HeXrNlwGoeBN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyHHTAXGz8RHlrQlrB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
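The raw response is a JSON array with one object per comment, each carrying the four coded dimensions shown in the table above. A minimal sketch of how such a batch might be parsed and indexed by comment ID (field names are taken from the response above; the helper name and the use of Python's standard `json` module are assumptions, not the tool's actual implementation):

```python
import json

# Two entries copied verbatim from the raw response above, for illustration.
raw_response = '''[
{"id":"ytc_Ugw_vHn-k_pb5_0y7px4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz6fCk7C4QVwJa1ja94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]'''

# The four coding dimensions observed in the response objects.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a batch response and index each coding by its comment ID."""
    codings = json.loads(raw)
    return {c["id"]: {d: c[d] for d in DIMENSIONS} for c in codings}

by_id = index_codings(raw_response)
print(by_id["ytc_Ugw_vHn-k_pb5_0y7px4AaABAg"]["policy"])  # liability
```

Indexing by ID is what makes the "look up by comment ID" view possible: the lookup becomes a single dictionary access rather than a scan over the batch.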