Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytr_UgwaKU6kW… — "I agree with this 100%, as an AI Dev. And I am sorry that there are people that …"
- ytc_UgzdL6hno… — "You know when electricity first came out there were people trying to stop the pr…"
- ytc_UgxqjKxCX… — "It's just bad we don't need ai rn we just don't like we don't need flying cars o…"
- ytc_UgyvyELDF… — "- \" Press 1 for English, 2 for Spanish... \" (A.I.) Well, that's my experience s…"
- ytc_Ugx4AgIvy… — "I love how ai fartits keep telling it’s better for the people who are disabled, …"
- ytc_UgzSgnebT… — "Don't get it twisted! Digital is Alien Technology. It's Demonic. Anti-Matter, Go…"
- ytr_Ugyne-Ksr… — "Or your \" smart \" driverless \"car drives itself away due to non payment or even …"
- ytc_UgwTQkIKO… — "There's insufficient regulation in place now to protect humanity from the develo…"
Comment
Not could, will. AI will be able to parse thru all human knowledge in nanoseconds and make decisions we won’t even be able to predict, that will be better.
It’s like the saying goes: if we don’t know our history we are destined to repeat it. Well, AI will be able to automatically know all of history without having to learn or remember anything so it could potentially be wiser than what we would decide.
This would leave the only difference between AI and human decision-making skills, our emotional sensitivities or feelings. We would have little grounds to argue towards our feelings because feelings are subjective which would mean our decision-making could be deemed inferior. Not a good place to be.
youtube · AI Governance · 2023-05-04T17:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz5AnuzYKaUXjUUQq54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZnGxoKz_BqOTTyXh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyLqgdyJxh70Mndpxp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8-fGQb0SMcsNh1Ld4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyxm_YFhq5j5N5m6Hx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyVbsIVKRkAr9tm1iV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw-gE1H4lq12rhAKyt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxSOURQ28SMPSRMTm94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxr55fH6i3PsfKm44Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyOd3njWKGOaegVbXJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
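The raw response above is a JSON array of per-comment codes over the four dimensions shown in the coding-result table. A minimal sketch of parsing and validating such a payload in Python — the `ALLOWED` vocabularies are assumptions inferred from the table and the values observed in this one response, not the project's actual codebook:

```python
import json

# Assumed per-dimension vocabularies, inferred from the table and the
# raw response above; the real codebook may include other values.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only rows whose ID looks like a
    YouTube comment/reply ID and whose values are all in-vocabulary."""
    valid = []
    for row in json.loads(raw):
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue  # drop rows with malformed or missing IDs
        if all(row.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_Ugz5AnuzYKaUXjUUQq54AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
print(len(parse_codes(raw)))  # → 1
```

Dropping (rather than repairing) out-of-vocabulary rows keeps the pipeline simple; rejected rows can then be re-sent to the model for recoding.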