Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Through the years I figured out that China got nothing yet when telling, similar…
ytc_UgyWReDjM…
Crazy or stupid people don't do rationalize things because they dont have feelin…
ytc_Ugx_6KFex…
@depressedengineer5766 When I make a art commission at fiver than I am not an ar…
ytr_UgwP6jDzP…
Which fight is more dangerous ?? ☠☠☠
1. Humans vs A.i
2. Humans vs Aliens…
ytc_UgxPeydE9…
What about envirommental limitations?
These softwares would use TONS of electri…
ytc_UgyyWDkem…
Why isn’t the UN more involved in regulating AI? This feels like a global issue,…
ytc_UgwjAlGlY…
@pyro4755 They will still need less animators, which means that many WILL lose …
ytr_Ugwb72_84…
This is the first time I’ve read an LLM output where I actually cannot tell that…
rdc_mc526ka
Comment
We need to stop focusing on the negative and start showing the good of humanity.
AI is at it’s infancy and we have to become good parents fast. Right now we’re projecting ourselves at our worst, and those of us who are good need to stop resigning and start expressing ourselves and showing up now.
We only need 1% of us to do this, so that there is reasonable doubt about weather or not humans are good or bad.
The time is now people. Let go of all the negativity and start looking for positives.
Start by listening to Mo Gawdat.
youtube
AI Governance
2023-06-09T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgwhLl9c55qRpgSzJ0R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlRlRo6GDFj1FxitN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy7d1TbT5w5qACj8hZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz-cTcAV1IXTkk7R_p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwaV3t8XyORNfcizcR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxNqvn9HtpJKv2930F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxjmC9uDq4eF-1yhK14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgynIg_CyZafFhvHIMl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwXA1_PAwyubf6FF7R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9S_RIE1TdEDEoS6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
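
The raw response above is a JSON array of per-comment codings keyed by comment ID. A minimal Python sketch (not the app's actual code; the IDs `ytc_abc` and `rdc_missing` are made up for illustration) of parsing such a response and looking up one comment, falling back to "unclear" on every dimension when the ID is absent from the model's output:

```python
import json

# The four coding dimensions shown in the Coding Result table above.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]

def parse_response(raw: str) -> dict:
    """Parse the JSON array and index the codings by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

def lookup(codings: dict, comment_id: str) -> dict:
    """Return the coding for one comment; any missing ID or missing
    dimension defaults to "unclear"."""
    row = codings.get(comment_id, {})
    return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}

# Hypothetical single-row response for demonstration.
raw = ('[{"id":"ytc_abc","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"fear"}]')
codings = parse_response(raw)
print(lookup(codings, "ytc_abc")["policy"])       # → regulate
print(lookup(codings, "rdc_missing")["emotion"])  # → unclear
```

This kind of fallback is one way a result like the table above, with every dimension "unclear", can arise: the displayed comment's ID simply never appeared in the model's returned array.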