Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "Ok I just did this and wow! I really didn’t expect anything of substance. It’s n…" (`rdc_mvxirqb`)
- "If AI decides to wipe out humanity, I totally understand, cause we are evil asf …" (`ytc_UgzpR-ksv…`)
- "In a strange way, these cases illustrate that the LLMs have the ability to train…" (`ytc_UgzcFwhs8…`)
- "@guillaumelagueyte1019 the limits we currently have on free speech are fine, but…" (`ytr_UgzfVDyhv…`)
- "I am in bca major and i tell u if u don't know how to code u will never understa…" (`ytc_Ugwb8UEDV…`)
- "Why would the AI first say that humans were absolute garbage and then at the sam…" (`ytc_UgwP_m4oj…`)
- "This isn't his argument though, this argument is from a.i. he is just trying to …" (`ytr_Ugym-ZPNL…`)
- "What scares me most isn’t AI itself — it’s the corporations that own it. Whoever…" (`ytc_UgzMvZYXU…`)
Comment
When I was at school in Berkeley, I was given an invite to protest the Second Bush administration, which itself looked like a fake $1million bill... but, just in case it wasn't obvious enough to some recipients that it was CLEARLY FAKE (from his dumb face on it to scenes from burning oil refineries and Abu Ghraib depicted in the background), it also had printed on it, "CONTAINS ZERO ACTUAL VALUE. THIS IS JUST A PROTEST INVITATION, NOT ACTUAL MONEY. DO NOT USE AS MONEY. SERIOUSLY."
So yes, among other security measures, I would be in favor of AI being strictly regulated so that anything they are asked to create comes with some automatic signifier to denote where it came from to prevent plagiarism or other fakery.
youtube · AI Governance · 2023-05-10T14:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwyLHQCay70QDDlvYJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz6yfrx37x8rxyFZE94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugyzr4T8NpWpVqbg5dd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzGjF197jWiwKHx8V14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz8Eixnm5ZWKmZ8Ird4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw1jfikHgjyPa7smd14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx9hYFTwlLcNytxWMV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx6yjVfgPbrzLoiPyd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgywbEUmueu_6CFBE714AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzA5TRet0cyHLdoAtR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
```
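Since the raw response is a JSON array of per-comment records, looking a code up by comment ID amounts to parsing the array and keying it by the `id` field. The following is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, while the function name and the two-record sample are illustrative, not part of the actual tool.

```python
import json

# Two records copied from the raw LLM response shown above (abridged for brevity).
raw_response = """[
  {"id": "ytc_UgwyLHQCay70QDDlvYJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzGjF197jWiwKHx8V14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzGjF197jWiwKHx8V14AaABAg"]["emotion"])  # fear
```

A dict keyed by ID makes the "look up by comment ID" view an O(1) operation per query; any ID missing from the batch simply raises `KeyError`, which is a useful signal that the model dropped a comment from its response.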