Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “@JCJW101You’re confusing Chat GPT 3.5 with AI in 10 years. 75% of customer servi…” (ytr_UgzKlbaMt…)
- “Properly connected, recursively improving AI can achieve everything you just sai…” (ytr_UgxPHUyCz…)
- “Why did I train to be a fashion illustrator when technology is providing individ…” (ytc_Ugza6buZm…)
- “Anyone else having old nokia predictive text flashbacks every time chatbots get …” (ytc_UgwYCrsfi…)
- “This is exactly one of the things I was afraid was going to happen with AI on th…” (ytc_UgwkCZ1x5…)
- “It infuriates me how people don’t want to accept that generative ai could never …” (ytc_UgwOvbMjK…)
- “The Tech in the car that makes it possible for autonomous driving should have se…” (ytc_UgygH3TNv…)
- “Once you add an element of levity, I can't take you seriously. If I was an AI an…” (ytc_UgzlpOzXx…)
Comment
I'm very anxious about this AI. there seems to be a way to kneecap it by copyright, but I am against that and I don't want to pick a lesser of two evils.
My personal argument is that AI is data collection, and like all data collection, should be regulated and opt-in. You should not collect people's data without their permission; regardless if that data is their search history, biometrics, or their artstyle. although, I feel like I'm the only one who has this specific opinion on the subject. I've never seen it brought up anywhere as I doom-scroll through AI discussion nor think that my representatives would understand what I'm talking about if I ever contact them about this.
I'm scared.
youtube · 2023-04-05T10:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyFG6HqzjNiN_F7vNF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgygiGKziD23x7ZfurF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwVQ0DrtExbbq0QFdt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxozk1zfFXZX7xenzZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxFXPLVHq7qcOLVGgV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyXteAYp2_B7CRZomd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyXlNCaO2p5GipOXQp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy56OzTBPQK-5GPpml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwa6KXsqOqcUSSxQL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgynKwvowfH0SscSunl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]