Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I have a question/am seeking advice. I want to start by saying, I now know AI is wrong and will no longer use it. I’m a person who makes mistake, I’m looking to make it right. I used AI as a sounding board, asking if my ideas would work (I didn’t know that writing groups were a thing and I have since joined one) it was more about curing my imposter syndrome than making anything new. Now; my first draft (that I wrote entirely) is done. But since I used it to validate myself, how do I move on? Is this book I can continue working on and publish? Or because I started using it as a sounding board, do I need to throw out the whole project (which would break my heart)
Source: youtube
Posted: 2025-06-27T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzSLJe6eYx7Zb1kI2l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxcYkG_hpBL44Md7cV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoOwY7e1vuMdDy0gR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxS_Jsre_2lX7nD-Ud4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzWjJqw5GJj2CzdYrx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw58VDZoEd63XqLeBt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx8WnjpgLhGnttb49Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCb_c4Mee6KR4lDlx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwEZRtcd_uEj1crGKR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugynk3668n3mP1Z7Htt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
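The coding result table above is derived from the raw batch response by parsing the JSON array the model returns and matching on the comment ID. A minimal sketch of that lookup, assuming the pipeline works this way (the `lookup_coding` helper and the single-row sample are illustrative, not the actual pipeline code; the sample row mirrors the fourth entry of the response above):

```python
import json

# Raw batch output as returned by the model: a JSON array of
# per-comment codings, one object per comment ID. Truncated here
# to a single illustrative row.
raw_response = """[
  {"id": "ytc_UgxS_Jsre_2lX7nD-Ud4AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "resignation"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the model's JSON array and return the coding row
    for one comment ID, or None if the model skipped it."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return row
    return None

coding = lookup_coding(raw_response, "ytc_UgxS_Jsre_2lX7nD-Ud4AaABAg")
print(coding["responsibility"], coding["emotion"])  # user resignation
```

Because the model keys each object by the comment ID it was given, a missing ID in the returned array signals a dropped or refused coding rather than a parse error.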