Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:

- "My gemini created an algorithm and compared itself against other forms of sentie…" (ytc_Ugwjv3Jcd…)
- "@dmitrizorkin3851 you’re very emotionally charged and not engaging in a real conv…" (ytr_UgxCTO4KD…)
- "Hollywood and the music industry WILL be no more, AI movie and music content wil…" (ytc_Ugzn8pxvs…)
- "With creative mind, humans create their own future... If AI behaves certain way,…" (ytc_UgxCAik_r…)
- "Keshuel Do you have any idea how much work it takes to get to a decent level in …" (ytr_UgxWFURuW…)
- "Women berter watch out. Men are fine with a robot wife and testtube kids. Make y…" (ytc_Ugx_huMbe…)
- "grow a beard, put on an accent. and you too can be an expert on ai…" (ytc_UgyStVbnP…)
- "Sam Edelman is an evil crazy person… he knows what he is planning & already char…" (ytc_Ugz62TWHM…)
Comment (youtube · AI Moral Status · 2025-11-01T09:4…)

> another comment in terms of deciphering truth from lies. since LLM's are probabilistic in nature. if all probabilities of sentences put together point to saying one thing - the earth is round. the model will say the earth is round, hence a truth. if all humans bellive 1 thing even tho not knowing we are wrong. the model will speak that truth and we will regard the model as all knowing
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
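The four coded dimensions each take a value from a closed vocabulary. A minimal Python sketch of a record type for one coding result, using only the code values observed in this sample (the full codebook may define additional codes):

```python
from dataclasses import dataclass

# Value sets observed in this sample; the full codebook may define more codes.
RESPONSIBILITY = {"none", "ai_itself", "developer", "company", "user"}
REASONING = {"consequentialist", "deontological", "virtue"}
POLICY = {"none", "liability", "regulate", "industry_self"}
EMOTION = {"approval", "indifference", "outrage", "fear", "resignation"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """True if every dimension uses a known code."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )

# The coding shown in the table above passes validation.
result = CodingResult("ytc_Ugy8iSTyp9NyAVDi8ft4AaABAg",
                      "ai_itself", "consequentialist", "none", "indifference")
print(result.validate())  # → True
```

Validating against an explicit vocabulary catches the common failure mode where the model invents a code outside the codebook.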
Raw LLM Response
[
{"id":"ytc_UgzV2V0G7yDp1WKgOBp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy8iSTyp9NyAVDi8ft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxTM3_p9Zs990HgOTt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzhHjhvvE2BcdADgod4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy9T-w0Clu46j3hNnB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxTGxgEQnozxdvv0Kp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzxn4xso09goJWUAI54AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwptZLUKuh6knkJaW54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw928RQgF47WVOLCOd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwgcqhLlRnka9l9Ia14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
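A raw response like the one above is a JSON array of per-comment codings, so looking up a coding by comment ID reduces to indexing the parsed array. A hedged sketch (field names taken from the response above; the surrounding tooling is assumed):

```python
import json

# Two rows excerpted from the raw response shown above.
raw_response = '''[
 {"id":"ytc_UgzV2V0G7yDp1WKgOBp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy8iSTyp9NyAVDi8ft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Index codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_Ugy8iSTyp9NyAVDi8ft4AaABAg"]
print(row["emotion"])  # → indifference
```

In practice the model output should be parsed inside a try/except, since an LLM can return malformed JSON.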