Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Many people show current most modern AIs as examples of how so-called racist and…
ytc_UgwHD0Phb…
I personally train AI models off of my own data for games, using commercial mode…
ytc_UgydiXBl9…
Regardless of your views on this, regardless of whether or not LaMDA is in fact …
ytc_UgyJpJIhV…
Anyone else saying "thank you" to ChatGPT for the given information, in the hope…
ytc_Ugzr9ErAN…
Imagine AI in the USA it will create a lot of millionaires and enriching a preda…
ytc_UgwKGgmhT…
@SweatyKnees Why pay when I can use Ai to get a much better looking version of …
ytr_Ugxw-_PeD…
@shroomer8294 i did not redefine anything you dumbass. Go open how a dataset is …
ytr_UgzYIWWye…
The danger is NOT how advanced A.I. will become, the danger is how much trust pe…
ytc_UgzAteiTt…
Comment
Leveraging a Large Language Model (LLM) as a judicial reference point prior to generating output is a sound strategy. This involves deconstructing the primary query into sub-questions and then utilizing the LLM as a reference, supported by validated sources to substantiate the final output. Employing weighted scales to assign confidence scores to specific values further enhances the process. The primary challenge lies in the immediacy of output generation; however, a more favorable outcome can often be achieved by allowing for additional processing time. Maybe
youtube
AI Responsibility
2026-03-25T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxtaMqm9Yhe0eOBEjd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxGcI8CwYwqDCHTHyx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwyP5kKXpSN84kV01J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxO9bipVDapxF8ea9V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxU17sIruI8CaFjiUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzXsQJnLzwmjsfHZZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyuB13aSvot8bg35XJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy2FyvdhO1814mm5sJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwDaUutZXic3I6a1sh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiPjnbf8O_PNRibI14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]
```
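A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the four dimensions shown in the Coding Result table; the allowed value sets are inferred from the values visible in this response, and the real codebook may contain more categories.

```python
import json

# Raw model output, truncated to two rows for brevity
# (same shape as the batch response shown above).
raw = '''[
  {"id": "ytc_UgxtaMqm9Yhe0eOBEjd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxGcI8CwYwqDCHTHyx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

# Allowed codes per dimension -- inferred from this one response,
# not an authoritative codebook.
ALLOWED = {
    "responsibility": {"company", "none", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "mixed", "indifference", "fear"},
}

def validate(rows):
    """Return a list of (comment_id, dimension, bad_value) problems."""
    problems = []
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append((row.get("id"), dim, value))
    return problems

rows = json.loads(raw)
print(validate(rows))  # → [] when every code is in its allowed set
```

Rejecting out-of-vocabulary codes at this stage keeps a single malformed LLM response from silently polluting the coded dataset.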