Rather than chasing novelty, Mallory Sheibley, Lead Product Manager – AI at Funnel, is focused on building something more difficult and more durable: AI that improves outcomes in a people-first, highly regulated industry without stripping away judgment, trust, or accountability.
Watch the full video below, or keep reading.
AI’s role in emotionally complex work
AI is not meant to replace the humans doing the job. Said another way, AI doesn’t replace jobs; it fills the gaps left by talent turnover and unfilled openings.
“It is supposed to make my life easier. It is supposed to be an add-on to my life. It is not supposed to be a second me,” said Sheibley.
In leasing, that distinction matters. The work is financially and emotionally weighty. You’re helping renters and residents find and love their homes. Relationships matter. AI can accelerate parts of the workflow, but it cannot replicate human judgment, empathy, or the high-touch moments that are unique to the renting and resident journey. Trying to force it to do so creates risk, not efficiency.
The real opportunity, she argues, is removing highly repetitive tasks and workflows so people can focus on the work that actually requires a human and on building even deeper relationships with their residents. That is where AI delivers leverage rather than disruption.
Designing AI for trust in regulated environments
One of the most persistent misconceptions in enterprise AI is the idea that systems should continuously learn and update themselves without human oversight. In theory, it sounds efficient. In practice, it introduces risk that most regulated industries cannot afford.
Sheibley is blunt about the gap between how these systems are marketed and how they actually function.
“At face value, it sounds incredible. I don’t have to teach it. It’s just going to learn. But most tools do not actually work that way, and you don’t want them to.”
In multifamily, AI does not operate in a neutral environment. It is interacting with policies, pricing, availability, fee transparency, and fair housing constraints that change intentionally, not passively. Allowing a system to infer and overwrite information based on partial signals or unverified conversations creates compounding error. Once those errors scale, they are difficult to detect and even harder to unwind.
That is why Funnel treats learning as a governed process, not an automatic one. Updates are deliberate. Knowledge is verified. Changes flow through review paths designed to preserve accuracy and accountability.
As Sheibley puts it, “You always want a human with human sense in the loop, even if it’s a light touch.”
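To make that governed process concrete, here is a minimal sketch of what a review-gated update could look like. It is an illustration of the idea only; the class names, fields, and approval flow are assumptions for the example, not Funnel’s actual implementation.

```python
# A minimal sketch of "governed learning": the AI can propose a change to verified
# knowledge, but nothing is applied until a human reviews it. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class ProposedUpdate:
    """A change the AI wants to make, held in a queue until a person signs off."""
    field_name: str          # e.g. "pet_fee"
    current_value: str
    proposed_value: str
    source: str              # where the AI inferred the change from
    approved: bool = False
    reviewed_by: str | None = None


class GovernedKnowledgeBase:
    def __init__(self, facts: dict[str, str]):
        self.facts = facts                        # verified, human-approved knowledge
        self.pending: list[ProposedUpdate] = []   # AI-inferred changes awaiting review

    def propose(self, field_name: str, new_value: str, source: str) -> None:
        """The AI never overwrites facts directly; it only queues a proposal."""
        self.pending.append(ProposedUpdate(
            field_name=field_name,
            current_value=self.facts.get(field_name, ""),
            proposed_value=new_value,
            source=source,
        ))

    def approve(self, update: ProposedUpdate, reviewer: str) -> None:
        """A human with 'human sense' applies the change deliberately."""
        update.approved = True
        update.reviewed_by = reviewer
        self.facts[update.field_name] = update.proposed_value
        self.pending.remove(update)


kb = GovernedKnowledgeBase({"pet_fee": "$300 one-time"})
kb.propose("pet_fee", "$350 one-time", source="prospect chat transcript")
# Nothing changes until a person reviews and approves the update:
kb.approve(kb.pending[0], reviewer="community_manager")
```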
This is not about slowing innovation. It is about preserving trust. AI that operates inside clear boundaries can move faster because teams are confident in its outputs. In real operations, guardrails are not friction. They are what make AI dependable enough to use at scale.
The hardest problem is the handoff
Designing effective AI workflows is less about what the system can do and more about where responsibility should sit at any given moment. Sheibley points to handoffs between AI and people as one of the most consequential decisions in product development.
In practice, this means designing AI to operate confidently within defined boundaries, then exit cleanly when context, sensitivity, or judgment outweigh efficiency. A renter asking for basic information can be handled quickly and consistently. A renter signaling uncertainty, emotion, or edge-case complexity requires a human who can read between the lines.
Operators who want to fine-tune the renter journey need to be able to intentionally enable and disable AI throughout the workflow, not as a blunt, all-or-nothing switch, but as a set of precise decisions tied to moments in the renter journey. Systems that cannot step back gracefully do not just frustrate renters. They create downstream work for teams.
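One way to picture those precise, per-moment controls is a policy table that pairs each stage of the renter journey with its own AI boundary and escalation signals. The stages, signal names, and policy shape below are illustrative assumptions for the sketch, not Funnel’s configuration.

```python
# A hedged sketch of per-moment AI controls rather than an all-or-nothing switch.
AI_POLICY = {
    # stage of the renter journey -> whether AI may act, and when it must hand off
    "initial_inquiry":     {"ai_enabled": True,  "escalate_on": ["fair_housing", "frustration"]},
    "tour_scheduling":     {"ai_enabled": True,  "escalate_on": ["accessibility_request"]},
    "application_issues":  {"ai_enabled": False, "escalate_on": []},   # always a human
    "renewal_negotiation": {"ai_enabled": False, "escalate_on": []},   # always a human
}


def should_ai_respond(stage: str, detected_signals: set[str]) -> bool:
    """AI acts only inside its defined boundary and exits when sensitive signals appear."""
    policy = AI_POLICY.get(stage, {"ai_enabled": False, "escalate_on": []})
    if not policy["ai_enabled"]:
        return False
    return not any(signal in detected_signals for signal in policy["escalate_on"])


print(should_ai_respond("initial_inquiry", {"pricing_question"}))   # True: routine info
print(should_ai_respond("initial_inquiry", {"frustration"}))        # False: hand off to a person
```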
Poorly designed handoffs create friction. They surface AI when it no longer belongs in the conversation, force teams to undo automated actions, or require renters to repeat themselves. Well-designed handoffs are almost invisible. The system does the work it is trusted to do, then yields control without ceremony.
The most damaging failures are often the quiet ones. When a handoff fails silently, renters believe they are waiting on a person while the system believes it has already responded. Time passes. No one intervenes. Trust erodes without a clear breaking point. This is why AI needs to be connected to a deep system of record that serves as the source of truth for your teams. When AI and teams share the same record of renter and resident communication, handoffs stay smooth and teams have context for what has already been discussed before they step in.
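Here is a small sketch of why that shared record matters: when the AI and the team write to the same conversation log, a handoff carries its context with it, and an unanswered handoff stays visible instead of failing silently. The data model and method names are assumptions for illustration, not a real API.

```python
# A minimal shared system-of-record sketch: every actor logs to the same conversation,
# and open handoffs can be surfaced so no one is left waiting without knowing it.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConversationRecord:
    renter_id: str
    events: list[dict] = field(default_factory=list)

    def log(self, actor: str, kind: str, detail: str) -> None:
        self.events.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "actor": actor,      # "ai" or a team member
            "kind": kind,        # "message", "handoff", "note"
            "detail": detail,
        })

    def open_handoffs(self) -> list[dict]:
        """Surface handoffs no human has acted on yet, so nothing fails silently."""
        handoffs = [e for e in self.events if e["kind"] == "handoff"]
        human_replies = [e for e in self.events
                         if e["actor"] != "ai" and e["kind"] == "message"]
        return [h for h in handoffs
                if not any(r["at"] > h["at"] for r in human_replies)]


record = ConversationRecord(renter_id="renter-123")
record.log("ai", "message", "Shared pricing and availability for a 2-bedroom.")
record.log("ai", "handoff", "Renter asked about a reasonable-accommodation request.")
print(record.open_handoffs())  # still waiting on a person: visible, not silent
```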
Another AI frustration is a looping failure where a renter receives the same follow-up question, the same clarification, or the same automated nudge across texts and emails. The system is technically functioning, but experientially failing. Instead of help, it creates irritation. These are not edge cases. They are signals that the AI system cannot tell when it is no longer helping, and instead keeps spamming renters or residents with the same information over and over.
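One simple guard against that looping failure is to check outbound messages against what has already been sent recently and stay quiet, or escalate, instead of repeating. The time window and exact-match rule below are assumptions chosen to keep the sketch short.

```python
# A hedged sketch of loop detection: block a send if an identical automated nudge
# already went out inside a recent window.
from datetime import datetime, timedelta, timezone


class OutboundGuard:
    def __init__(self, window: timedelta = timedelta(days=3)):
        self.window = window
        self.sent: list[tuple[datetime, str]] = []   # (timestamp, normalized message)

    def allow(self, message: str) -> bool:
        """Return False when sending would repeat a recent message."""
        now = datetime.now(timezone.utc)
        normalized = " ".join(message.lower().split())
        recent = [m for t, m in self.sent if now - t < self.window]
        if normalized in recent:
            return False          # escalate or stay quiet instead of repeating
        self.sent.append((now, normalized))
        return True


guard = OutboundGuard()
print(guard.allow("Are you still interested in touring this weekend?"))  # True: first ask
print(guard.allow("Are you still interested in touring this weekend?"))  # False: would loop
```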
Sheibley describes fine-tuning handoffs as an ongoing calibration, not a fixed ruleset. Funnel evaluates real interactions at scale, studies how renters respond, and listens closely to leasing teams who navigate these transitions. Over time, patterns emerge. Certain tasks remain reliably automatable. Others consistently benefit from human intervention.
Those signals inform how workflows evolve. Boundaries tighten or loosen. Triggers change. Escalation paths become more precise.
What works today is revisited tomorrow, not because the system failed, but because the environment it operates in is always changing. That continuous tuning is what allows AI to remain helpful without becoming intrusive, and scalable without becoming brittle.
The hardest part of AI is teaching it when to step aside.
Where impact actually shows up
The effectiveness of AI in day-to-day operations is rarely revealed through dramatic moments. It shows up in small behavioral changes that compound over time.
Sheibley points to suggested messaging as one of the clearest examples. The system generates a draft response based on context and policy. Leasing teams then make deliberate adjustments. Tone is softened. Language is personalized. A sentence is reworked to better reflect how that associate builds rapport.
“It’s actually quite nice to see how they like to communicate with people that they’re building a relationship with.”
Those edits are instructive. They show where AI removes friction and where human judgment naturally reasserts itself. The system handles recall and structure. The human brings empathy, pacing, and nuance.
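That division of labor can be sketched in a few lines: the system drafts from verified facts, the associate rewrites for tone, and the size of the edit shows where human judgment stepped in. The draft template and the way edits are measured are illustrative assumptions, not Funnel’s implementation.

```python
# A small sketch of the suggested-messaging pattern: AI handles recall and structure,
# the associate adjusts tone and pacing before sending.
import difflib


def suggest_reply(renter_name: str, facts: dict[str, str]) -> str:
    """Assemble a draft from verified facts; a human edits it before it goes out."""
    return (
        f"Hi {renter_name}, the 2-bedroom you asked about is available starting "
        f"{facts['available_date']} at {facts['rent']} per month. "
        f"Would you like to schedule a tour?"
    )


draft = suggest_reply("Jordan", {"available_date": "July 1", "rent": "$1,850"})
sent = ("Hi Jordan! Great chatting with you earlier. The 2-bedroom you asked about opens up "
        "July 1 at $1,850 a month, and I'd love to show it to you. Want to come by this week?")

# Measuring how much the associate changed the draft shows where human judgment steps in.
edit_ratio = 1 - difflib.SequenceMatcher(None, draft, sent).ratio()
print(f"Associate edited about {edit_ratio:.0%} of the suggested message.")
```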
At scale, that division of labor matters. Faster responses reduce lead decay. Consistent information reduces error. The ability for associates to focus on personalization instead of composition reduces cognitive load across the day. Over time, those small efficiencies add up to more sustainable workloads, especially in centralized environments where volume is high and context switching is constant.
The result is a workflow that feels lighter for teams and more human for renters, without sacrificing consistency or control.
Why restraint scales better than hype
Teams building AI at scale make decisions in an environment defined by operational complexity, not technical possibility. Every addition to a product affects training, adoption, support, and trust across thousands of daily interactions.
Sheibley frames roadmap choices around durability. Features are evaluated based on how well they align with existing workflows, how quickly teams can adopt them, and whether they produce measurable impact without introducing friction.
“Sure, we could build something fun and flashy that solves about 1% of unique scenarios. Or we could finish the feature that’s actually going to drive return and consistently cover 90 plus percent of real world scenarios.”
That discipline compounds over time. Capabilities rooted in real operational needs tend to spread more predictably across portfolios, require less change management, and deliver value that teams can recognize immediately. As confidence grows, so does the willingness to adopt more advanced automation.
This is how AI earns the right to scale. Not through novelty, but through reliability. Systems that perform consistently become part of the operating model rather than an overlay on top of it. The result is technology that amplifies human performance without forcing organizations to reorganize around the tool itself.
Continue the conversation
Watch the full Inside the Funnel interview with Sheibley on YouTube for a deeper, unfiltered look at how AI is designed to work inside real multifamily operations. Or listen to this episode on Apple Podcasts or Spotify.