Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "I’m somewhat working on writing a book. AI has helped me come up with character …" (ID: rdc_lz5oxg1)
- "That's what I'm thinking. So many people hate how the use of AI is being portray…" (ID: ytr_UgychfWWB…)
- "In my opinion, if we assume that a super-intelligent AI is also a product of hum…" (ID: ytc_Ugwcmfqcg…)
- "With these weapons, it's hard to invade other countries. Seems like all war will…" (ID: ytc_Ugyv8DVGJ…)
- "I love the “serious” sounding music by fake news CNN 🤣 democrats are a bigger jo…" (ID: ytc_Ugysz4xBo…)
- "Using digital devices isn’t the same as using AI, in what world does that make s…" (ID: ytc_UgyssTLDZ…)
- "IT WAS YOU I SAW IN A RANDOM INSTAGRAM REEL DEVOURING CANNED FOOD MONTHS AGO! I …" (ID: ytc_UgzMZ21Bc…)
- "They are conscious he knows it he's trying to think of a lie not coming it's her…" (ID: ytc_Ugzu6IfH6…)
Comment
> The issue is now is you physically cant stop AI development. If you develop rapidly you risk the downfall of your own society, but if you dont develop rapidly and say you leave development to countries like North Korea and Iran.. you risk being left behind understanding the cutting edge of AI tech and also how to counter it, in doing so also putting your own state at serious risk. Its going to almost turn into an AI weapons race that you know doing so dooms you, but not doing so also dooms you. Which is why many analysts think we are all fucked basically. As one AI scientist i listened to recently put it, the Neanderthals didnt realise the threat that we were to them until it was too late. Sadly its probably already too late, as bad actors of this world will not set or keep to guidelines and boundaries on this tech. And privately major powers around the world will also secretly continue this development anyway.. regardless of what is said publicly.
Platform: youtube
Topic: AI Governance
Posted: 2023-05-02T12:2…
♥ 90
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx2WGvgJtQ6P_KdZl94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyil9DdiAU6zEmu-2p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgziwWBNSPP026Pvn9N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxQzvUIZuJ2HiOwivN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwf6jblSVzlOfrM_aR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxfae9-LoraSPWPl0x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxE5Zc-8AbqCwGok4V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyVF00N2JyBX4YP3Lt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw0bw-uXGDfo_7q8Wh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyxjQ2d43Yp8kOc8M54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
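Since the page supports looking up a coded comment by its ID, here is a minimal sketch of how a raw response like the one above could be parsed and indexed by comment ID. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the JSON shown; the helper function itself is hypothetical, not part of the tool.

```python
import json

# A raw LLM response is a JSON array of coded comments, each carrying an "id"
# plus the four coding dimensions, as in the example above.
raw_response = """
[
  {"id": "ytc_Ugwf6jblSVzlOfrM_aR4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw response and map comment ID -> its coded dimensions."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugwf6jblSVzlOfrM_aR4AaABAg"]["responsibility"])  # distributed
```

A lookup that misses (an ID not present in the batch) raises `KeyError`; using `codes.get(comment_id)` instead returns `None` for unknown IDs.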