Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "enjoy the physical world while you are in it. long after AI kills all humans and…" (ytc_UgxpxB0wK…)
- "so, we already use too much water... making poor decisions and wish to invent su…" (ytc_Ugyeqj1IF…)
- "Really interesting podcast. Really got struck by the question; what remains in t…" (ytc_UgxxlX1Rz…)
- "Yet she got no problem using the iphone and 100s of other technologies made in c…" (ytc_Ugz6Xut4N…)
- "I mean I did I not think it was ai I thought is was 🖕🏿…" (ytc_Ugx1dYM07…)
- "Why are you surprised is my question? Do you think all those nvidia AI chips are…" (ytc_UgyzAhwJr…)
- "I was with you until Gadwat said AI is a bigger threat than Climate Change,…" (ytc_UgzsqQrz2…)
- "Am sure it is more complex though what if A.I. could be encoded with something l…" (ytc_UgyQDZnw_…)
Comment
Here's my defense for this. I am a game developer and I am personally pretty good at writing good code. I avoid code smells, I keep things organized and structured, etc. I don't really have any other programmer friends, so it's not like I can just ask one for help, and stack exchange is not a very quick way to see if my code can be optimized.
BUT, there's a teeny tiny bit of impostor syndrome in my brain that tells me I'm not doing it right, but my code is so specific to my game that I would have to explain entire swathes of code just to see what could be improved upon, which would be costly. I recently tried out Chat-GPT, giving it my code and asking it if I could further improve on it, correcting its mistakes and assumptions, and in the process I get a better version of my code, or if not better, the exact same code.
Simply, having an AI that can remember a lot more than what I can store in my memory can be quite useful, and even though it could make HUGE mistakes, they are usually quite obvious.
youtube · AI Harm Incident · 2024-06-07T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxFpWK3jWw03Op36ld4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzXwE3uy8hJs1LYIHp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyuf_cY1ZPa2VTdSZx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2-q5DXUlAEIxPg-R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTmY8U4bYdsuwAY8J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwa1TLadJLzv08duBF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdINAIvywTKropXBh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyUX_ajFPCKU9Kjl454AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyE9ll05e8ZU_THl414AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxzRz16bXMWom3UCNB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
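The raw response above is a JSON array of per-comment codes, one object per comment ID, with the same four dimensions shown in the coding-result table. A minimal parsing-and-validation sketch is below; the allowed value sets are inferred only from the values visible in this one response and table, not from a published codebook, so they are assumptions.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this
# dashboard export; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "mixed", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    records with missing fields or unrecognized category values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]  # KeyError here flags a malformed record
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Example with a hypothetical comment ID, mirroring the response format above.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"mixed"}]')
print(parse_codes(raw)["ytc_example"]["emotion"])
```

Failing loudly on unknown values is deliberate: when the model drifts from the coding scheme (a common failure mode), it is better to surface the bad record than to silently store an uncodable category.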