Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The big problem is blue collar jobs and the skilled trades are at least 95% men …" (ytc_Ugwwz-Oao…)
- "this is unsurprising. AI learn from what WE do to each other (scraping ALL corne…" (ytc_Ugyp9kWkD…)
- "They have trained AI to lie.. The world is already distopean. Peopled not unders…" (ytc_UgwXG1B4z…)
- "So, in order safeguard us from the dangers of AI, Elon helped create OpenAI and …" (ytc_UgyT23Mzc…)
- "So they push DEI and AI... Why not having people with merit select people based …" (ytc_UgxdoQ4EQ…)
- "@caryonplays9024 you mean those kind of programs where you draw a line and it ki…" (ytr_UgycDFgCo…)
- "I am also personally very worried about the environmental effects that AI create…" (ytc_UgzTmSTkM…)
- "When AI can climb up stairs in a 6-story walk-up and unclog the toilet, I’ll wor…" (ytc_UgzAKFV34…)
Comment
Harmful. A.I should be used as a regulated and law abiding tool as a means for human intellectual aid. A.I has already proven its capabilities for rapid memory recall, data point predictions using LSTM and Machine Learning. With the help of A.I, humanity has the capability of advancing beyond our wildest dreams at an exponential rate.
youtube
2024-03-04T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwepttiHeOB1qztBGR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw7V68XyvxsNpp9t714AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzxcPibR6RSgWulnBZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxb7HTrFLAG8bIQmfF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugw68jGZmkbue0Mwaut4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgygViCsMvqTWCxXceV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzd_Ywx880HDV9wek94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgynYXZlDBTD6SrW1od4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyN3YJFoz_Qw5DisUV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzZ6foBmYWf6CnBZgp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
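The raw response is a JSON array with one coding object per comment. A minimal sketch of how such a payload could be parsed into a per-comment lookup, with light validation. The allowed value sets below are inferred from the values visible in the responses on this page and are assumptions, not the authoritative schema:

```python
import json

# Allowed values per dimension, inferred from the sample responses above
# (assumption: the real coding schema may include additional values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, dropping invalid rows."""
    rows = json.loads(raw)
    out = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue  # skip rows with no comment ID
        # keep the row only if every dimension carries a recognized value
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: row[dim] for dim in ALLOWED}
    return out

# Example with a hypothetical comment ID, not one from the page above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
codings = parse_codings(raw)
```

Validating against a fixed value set before storing is what makes the "Coded at" record trustworthy: a malformed or hallucinated label from the model is dropped rather than silently written into the coding table.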