Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "In July 2023, the chat GPT's answer to the requirement of breaking free Open AI …" (ytc_UgzXVjExA…)
- "I can see a 68 or 52 hour work week being beneficial if you get the GDP results,…" (rdc_dv0j432)
- "I love your voice / And yeah ai art and human art can never be compared ever / Art…" (ytc_UgxwQi8nK…)
- "a lot of people grew up in racist homes but had to pretend to be progressive... …" (ytc_UgzOi86OE…)
- "If we're going to be realistic about it, using ai to generate a reference is no …" (ytr_UgyKCtClx…)
- "He was murdered there's no two ways about it the interview with Open A.i owner h…" (ytc_UgwFo4lkW…)
- "we have spend decades writing dark fantasy, cyberpunk dystopias, and a pessimest…" (ytc_UgwChauCt…)
- "This is the kind of stuff my grandchildren will have the displeasure of studying…" (ytc_UgyM6v4a4…)
Comment
Those on the pro side already lose based on it being a slippery slope fallacy. Classic way to prove this is by using examples of past technology. Who's to say someone with a fully automatic weapon will not just go around constantly killing? Or a rocket launcher? Or a nuke? Because we have systems in place that guard rail against these actions. If it's possible for there to be an existential threat in the pros perspective, then there is equally just enough chance for humans to develop defensive systems around AI. The pros perspective essentially is suppressing technology by default.
| Platform | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2025-02-16T09:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
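Each coded comment gets a value on four closed dimensions. A minimal validation sketch is below; note the category sets are inferred only from the codings visible on this page, not from a documented schema, so treat `ALLOWED` as an illustrative assumption:

```python
# Hypothetical validator. The ALLOWED sets are inferred from the values
# observed in the raw responses on this page, not from an official codebook.
ALLOWED = {
    "responsibility": {"none", "distributed", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_coding(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means every dimension is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = coding.get(dim)  # missing key -> None, reported as unexpected
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding shown in the table above passes; an empty coding does not.
print(validate_coding({"responsibility": "none", "reasoning": "consequentialist",
                       "policy": "regulate", "emotion": "fear"}))  # → []
```

A check like this is useful as a guard before storing model output, since a free-text LLM can drift outside the category set.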
Raw LLM Response
```json
[
{"id":"ytc_UgyUGgUlM0sPlrHlY5B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXWgnfVXMyJcEoaAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy19qng_0mU1MAslHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyqwOM9j68xGe2aCDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-XTIElT6SaBW8e1p4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxLKvjVULwWpr_9zLB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy9wRiF9AtUw3XsMs94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw-YUD7igKvxKOso-Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSIcgpqCmm1DQ2PDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7pjPwQp36kXUqWMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```