Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "He eyes rolled ths man might be dead ans some of yall watching this like good en…" (ytc_Ugxn2ovyi…)
- "To play devils advocate in a situation where I'm detecting your cultural ideolog…" (ytc_UgwI6sDK8…)
- "AI could quite possibly launch a nuclear attack by mimicking being the president…" (ytc_UgxIHbyxs…)
- "Divine providence. In God we trust! He provides and deprives for the salvation…" (ytc_UgyasBmuA…)
- "@SahilSharma-mb7bg you are not looking complete picture here my friend. Compa…" (ytr_UgzbIJDSl…)
- "The ai filter breaker is me cause like idk how it happened but it did and my eye…" (ytc_UgyLwBCJ5…)
- "I consider myself very creative, where do I apply for a job at OpenAI? xd…" (ytc_Ugxd-ug9l…)
- "Judd Rosenblatt, CEO of Agency Enterprise Studio- who is that, and what is that?…" (ytc_UgyzH9b8G…)
Comment

> In reality, the fear humans have of AI is the fear we have of each other & not trust one another.
> Back in the late 19th & early 20th century, it was the mad scientist. In the 50's it was nuclear power & robots. Today it's aliens & AI. Notice a trend?
> Not saying scientists shouldn't be smart about developing AI, w/safeguards & implementing asimovian principles to algorythns, only that AI-phobia & robophobia says more about ourselves than it does about some binary, invisible entity that can make decisions for itself.

Source: youtube · AI Governance · 2023-07-07T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzCW0zGeaIwDcuOBLt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEi5KoFIQphcm3toZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgykYmK_kIuki2zvQrZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgySLWphhOU8743AQo14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzdvAoPf7MuN-4Rkrl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-A5iMC4DGrInsu4B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyLtkPrpIorMZyHqr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzya7N-3fVx75W2mP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwScvCU8AUy7HNfaPl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugyge8SvjZ0WXpeW3ER4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
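A raw response like the one above is a JSON array of per-comment records, each keyed by comment ID with one value per coding dimension. The sketch below shows one way to parse such a response, index records by ID, and flag values outside the vocabulary. The `VOCAB` sets are an assumption: they contain only the values visible in the outputs on this page, not the full codebook, and the function names (`parse_response`, `validate`) are hypothetical, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred ONLY from the outputs visible above.
# This is an assumption; the real codebook may define additional categories.
VOCAB = {
    "responsibility": {"company", "distributed", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"fear", "resignation", "indifference", "mixed", "outrage"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM response and index its records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

def validate(records: dict) -> list:
    """Return (id, dimension, value) triples that fall outside VOCAB."""
    issues = []
    for cid, rec in records.items():
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                issues.append((cid, dim, rec.get(dim)))
    return issues

# Two records copied verbatim from the response above.
raw = """[
  {"id":"ytc_UgzCW0zGeaIwDcuOBLt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgykYmK_kIuki2zvQrZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}
]"""

by_id = parse_response(raw)
print(by_id["ytc_UgykYmK_kIuki2zvQrZ4AaABAg"]["emotion"])  # resignation
print(validate(by_id))  # [] when every value is in the vocabulary
```

Indexing by ID is what makes the "look up a coded comment" view cheap: the coded row shown in the table above is just `by_id[comment_id]` rendered one dimension per line.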