4 Comments

I loved this quote!

"This limits the value of testing, because if you had the foresight to write a test for a particular case, then you probably had the foresight to make the code handle that case too. This makes conventional testing great for catching regressions, but really terrible at catching all the “unknown unknowns” that life, the universe, and your endlessly creative users will throw at you."

Helping me create UTs is my main use of LLMs during coding (in addition to creating ORM models...).

I feel that this is indeed the best strategic use of its ability to think differently than you and catch things you missed, with minimal risk for the company.

Feb 26 · edited Feb 26 · Liked by Leonardo Creed

Deriving tests from the existing code instead of from actual reasoning (whether human or automated) seems like a bad idea. Imagine my code has an unknown bug. The LLM could write a test case that asserts this bug stays in the code: even if someone or something discovers the bug later and tries to fix it, the generated unit test would fail, and figuring out that the test itself is wrong seems much harder than simply using human reasoning to write correct test cases in the first place. Does the paper mention anything about this risk?
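Concretely (a made-up sketch, not from the paper; the function and its bug are hypothetical): if the model infers the "expected" value by reading the implementation rather than the specification, an off-by-one bug gets frozen in place by the generated test.

```python
import unittest


def days_in_range(start_day: int, end_day: int) -> int:
    """Intended to count days in an inclusive range."""
    return end_day - start_day  # bug: should be end_day - start_day + 1


class TestDaysInRange(unittest.TestCase):
    # A test derived from the code above asserts the buggy output,
    # so a later fix (adding the +1) would make this test fail.
    def test_week(self):
        self.assertEqual(days_in_range(1, 7), 6)  # correct answer is 7


if __name__ == "__main__":
    unittest.main()
```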
