Two years later, Elena stood on a stage at an HR technology conference, looking out at hundreds of professionals who had gathered to hear her speak.
"When I first encountered the AI layoff system," she began, "I was told it was objective. Unbiased. Data-driven. But what I learned is that data is only as good as the humans who collect it, and algorithms are only as ethical as the people who design them."
She clicked to her next slide—a photo of the original list that had started everything. Forty-seven names. Forty-seven lives.
"This list was generated by a machine. But the decision to use it, to trust it blindly, to abdicate our human responsibility—that was a human choice. And it's a choice we make every day when we implement AI systems without oversight."
The audience was silent, attentive.
"I'm not here to tell you that AI has no place in HR. It can help us identify patterns, predict trends, and make more informed decisions. But it should never replace human judgment when it comes to people's livelihoods."
After her talk, a young HR manager approached her. "I just started at a company that's implementing an AI performance review system. I don't know what to do."
Elena smiled. "Ask questions. Understand how it works. And never be afraid to push back when something doesn't feel right."
That was the protocol she had developed—the Human Protocol. Not a rigid set of rules, but a framework for ensuring that technology serves people, not the other way around.
As Elena packed up her materials, she thought about the journey that had brought her here. The pink slip protocol had been designed to make layoffs efficient. Instead, it had sparked a movement to make them humane.
The algorithm was still running somewhere, probably. But so were countless other systems, each one an opportunity for someone to ask the right questions, to push for the right changes.
She smiled and headed to her next meeting. There was still work to do.
— To Be Continued —