There are dozens of terms flying around for AI Workers. Learn from leaders from Copy.ai, Relevance AI, and Tribble as they dive into each term's meaning and its ideal use case.
0:00
All right. Thank you, everyone, for joining us here at AI Workforce Summit.
0:04
I'm really excited for this session. We're going to be talking about the
0:07
differences between
0:08
like co-pilots and agents and workers and really digging into the different use
0:12
cases.
0:12
And we have a great panel of speakers here today, which I'm going to let them
0:16
introduce themselves.
0:17
And Daniel from Relevance AI, if you would like to go first, that'd be amazing.
0:21
Absolutely. Thanks, Sarah. Thanks for having us and looking forward to this
0:24
conversation today.
0:26
So I'm Daniel. I'm going to go first. I'm from Relevance, and we're an AI workforce
0:29
platform.
0:30
Simply put, we enable customers to build and deploy AI agents that complete end-to-end
0:35
tasks
0:36
autonomously on autopilot. Our goal is to ensure that teams are only limited by
0:41
their ideas,
0:42
and we don't want them to be limited by their size. So we give them this extra
0:46
lever that they
0:47
can pull when they want to do more with less, especially something that's very
0:50
top of mind for
0:50
people today. We achieve this specifically with a horizontal platform, because
0:55
just like hiring
0:56
employees, we really believe that every agent needs to be customized and
1:00
configured for that
1:01
specific organization, for that specific role. And some of our flagship use
1:05
cases include a
1:06
BDR agent, Bosh, in sales, helping automate the top of funnel, whether it's outbound,
1:09
outbound,
1:10
inbound. We do stuff on support, we do stuff on marketing as well. But again,
1:14
the key piece there
1:15
is that being a workforce platform, we want that customization, we want that
1:18
configuration,
1:19
just like when you hire someone, you don't want to bring them on and have them do
1:22
the things from
1:23
their old company; you want them to do things at your company. And so that's a
1:26
really crucial
1:26
aspect of our platform. I really love that analogy of bringing someone on. You
1:29
don't want them to
1:30
have the old bad habits from their old company. You want them to be doing the
1:33
things you want
1:33
them to do at your organization. So I'm super excited to dig into some of your more
1:36
agent-specific stuff
1:37
here in a minute. Next up, Kyle from CopyAI. I'm so excited you're here. We've
1:41
chatted a few
1:42
times and really excited to talk about AI workers and agents with you. Always a
1:46
pleasure, Sarah.
1:47
Thanks again for having me. I'm Kyle. I lead the marketing team here at CopyAI.
1:51
And we're a
1:51
go-to-market AI platform, very similar in many ways to what Daniel just talked
1:54
through, helping
1:55
companies codify their best practices in what we call AI workflows and then
2:00
extend those workflows
2:01
across their entire organization so that you can fully automate some things,
2:06
really streamline
2:07
other things, and just get tremendous business value out of AI. And we're
2:10
specifically focused
2:11
on going super deep on AI for go-to-market teams. And the way to think about
2:16
this is open up your
2:18
marketing, SDR, sales, CS, or operations playbook to any page. The
2:22
pages in those playbooks
2:23
are processes, and those processes right now are executed by humans. Well,
2:27
what if you could
2:28
execute those processes in some form, partially or fully, with AI, and get the
2:32
business value that
2:33
comes from that? And that's what we help our customers do. Amazing. And then
2:37
last, obviously,
2:38
most certainly not least, Sunil is joining us from Tribble. Sunil, thank you so
2:41
much for being here.
2:42
Thanks for having us, Sarah. And hello, everyone. I'm Sunil Rao, the CEO of
2:47
Tribble.
2:47
And similar to the first gentleman, the company we're building really is
2:52
focusing in on building
2:53
agents, but really going deep into the workflows of a specific role within a company
2:57
. So even within
2:58
go-to-market, specifically looking at roles like sales engineers, proposal
3:02
managers,
3:02
and really digging into the workflows of those individuals and starting to
3:06
automate a percentage
3:07
of those workloads as an agent, as a digital teammate that you can hit up to
3:11
get specific tasks done
3:12
on the team. So we're really taking a more personified approach, but we really
3:16
believe going
3:17
deeper into that workflow allows us to be more effective at those specific
3:20
tasks.
3:21
Amazing. So even in those introductions, we've heard a lot of different
3:25
terminology used.
3:26
There were workers, there were agents, there were co-pilots. We've heard that term
3:30
before. So what I want to
3:32
kick this session off with is for everyone here that's listening, where are
3:36
there differences in
3:37
these terminologies? Is a co-pilot different than an agent? Is that different than a
3:40
worker?
3:40
Or are they synonymous? Are they interchangeable? So I'd love to just get all
3:44
of your takes on these
3:44
different terms that we're hearing a lot in the market right now and how we
3:48
should be thinking
3:49
about them. So Sunil, I would love to hear your take first on how you think
3:52
about these different
3:53
terms that we're hearing a lot in the market right now. Yeah, I'll give you the
3:56
perspective that
3:57
we at Tribble have had, and even our product has evolved quite a bit in the course
4:02
of the last year. I
4:02
think, Sarah, when we last spoke, we had a version of the product out. We were
4:06
very focused on the
4:08
RFP workflow within the role of an SE. Very specifically, you're getting RFP'd
4:13
by customers.
4:13
Typically, you need to have some deep product knowledge, functional knowledge
4:16
to answer those
4:17
questions. So the product we first put out in the market was actually a Chrome
4:20
extension. It was a
4:21
co-pilot. So it's actually assisting the human who's doing the job in a web
4:24
browser or within an
4:26
application, helping them complete that task. And they're kind of ferrying
4:29
information over.
4:30
It's going out, doing some work, coming back, getting the information, and then
4:33
reporting.
4:33
When we think about co-pilot, we kind of talk about it that way where it's an
4:38
assist to the
4:38
human. When we talk about agent, we actually look at it as, hey, can we hand
4:42
over some of this work
4:43
directly to the Tribble agent? And in this exact workflow for the RFP, we
4:48
actually allow the users
4:49
to send a file over via the conversational platform. And Tribble just takes it
4:54
off like an
4:54
intern: it takes a first stab at it, figures out whatever problems it runs
4:57
into,
4:58
and it comes back with the completed file with the answers in the right place. So when
5:01
we think agent,
5:01
it's more end to end workflows, I would say. We think co-pilot, it's more of an
5:05
intercept
5:05
into the existing workflow. Very interesting, Kyle. Do you agree with that? Do
5:10
you see any
5:10
differences within either your own product or sort of like what you're seeing
5:13
on the market?
5:14
Yeah, on the whole, I would say, Sunil, we're more or less on the same
5:17
page. I would say we
5:18
take a slightly different approach where we think about agents as performing a
5:22
multi-step
5:23
act. And so, as an example, as part of our account intelligence workflow
5:28
that builds a
5:29
really comprehensive account plan for sales teams, there is an agent embedded in
5:33
that workflow,
5:34
who is a research agent who has been trained on how to do all the things that
5:39
an intern or an
5:40
offshore company or an SDR in house would do to understand what's going on
5:44
inside of this company,
5:45
run the 50 different Google searches, search these different trade publications
5:49
, do all of these
5:50
things that a research agent would do. Now that's one step in the workflow.
5:55
There are other steps
5:56
that are necessary to create a full account plan. So agents, I think, can be
6:01
end to end. They could
6:02
be a subcomponent of an entire process, but, I think, multi-step is
6:06
the right way to think
6:07
about it. Co-pilot to me, especially the co-pilot that most people are familiar
6:11
with, like the
6:12
ChatGPT-type co-pilot experience, is more one-off tasks. And frankly, it's a pretty
6:18
frustrating experience
6:20
for a lot of teams. I'll speak specifically about go-to-market because that's where I
6:23
spend the most time. It's
6:24
pretty frustrating for go-to-market teams when their CEO is like, our AI
6:28
strategy is everybody gets
6:29
a ChatGPT license. And they're like, what? I have to go learn how to prompt.
6:35
And so I've seen a lot
6:36
of people, you know, a lot of folks will say that generative AI right now is in
6:39
the quote unquote
6:40
trough of disillusionment where we were so excited about the transformational
6:45
value that you're going
6:46
to get from these companies. And then you went and got a ChatGPT Enterprise
6:49
license. And now you're
6:50
in this disillusioned trough where you're like, I don't know how to do this.
6:54
And so that kind of
6:55
co-pilot experience that's super general and requires everybody inside of a
6:59
company to develop
7:00
a new set of skills, it ain't going to happen. Like it's just not going to
7:04
happen. So I think
7:05
everything that Daniel and Sunil have talked about so far is about how can you
7:08
go much more
7:09
sophisticated than that and develop an experience, whether it's a co-pilot or an agent
7:13
or whatever,
7:14
that's much less general purpose and much more focused on solving something
7:18
specific for a
7:19
specific person, team, department, or company. And that to me is what a real
7:24
agent or a real
7:25
co-pilot will do to deliver real business value. I like that. And you mentioned
7:29
the trough of
7:29
disillusionment. I think it's crazy. We're calling it the
7:32
AI era here at
7:33
Qualified. And it's crazy, I've never been
7:37
in an era where
7:37
things are in such different phases. Like the AI worker one seems like it's at
7:40
its peak, but
7:41
some of these AI co-pilots are in the trough of disillusionment. I feel like
7:44
it's
7:44
all in different places, which is wild. Daniel, you guys obviously have
7:49
agents. I'm curious to
7:50
get your take on this as far as like agents versus co-pilots and that
7:53
terminology that's being used.
7:54
Yeah, I think what Sunil and Kyle covered is really helpful. The lens we have
7:59
is like one step
8:00
back a little bit where like we actually look at everything through this notion
8:04
of a co-pilot or
8:05
an autopilot. I think for most buyers out there in the market right now, if you
8:09
start Googling one
8:10
of those terms, like AI agents, AI workers, AI employees, you'll get co-pilots,
8:13
you'll get them all mixed up.
8:15
So I think when you're evaluating software, right, when you're looking for
8:18
solutions in this space,
8:21
if you can start thinking about them as, is this a co-pilot or an autopilot,
8:23
it's a really
8:23
helpful way, I think, to distinguish the kinds of products out there. And like
8:26
Kyle was mentioning,
8:27
we really believe that autopilots are places where you can make
8:31
dynamic decisions
8:32
across multiple steps; it's not a static workflow. The days of RPA, where you have an if
8:35
this, then that,
8:36
are gone now, right? With agents you can actually have it on autopilot,
8:40
which means it can make
8:41
dynamic decisions based on the scenario it has in front of it. And we have a
8:44
bit of a controversial
8:46
opinion on this at Relevance. We actually think that co-pilots are a bit
8:49
of a short-term
8:50
trend. I think for every single vertical or functional role where the autopilot gets
8:54
good enough,
8:55
people are going to prefer that, because it simply lifts more of the effort from
8:59
your team
8:59
to the autopilot, and it leaves your team not encumbered by an
9:03
assistant, but actually
9:04
having something they can delegate to. You don't see people in companies having
9:08
teams just full of
9:09
assistants, they have people they can delegate to. And I think the AI workforce
9:12
is a very similar
9:13
pattern, so you want to delegate to an autopilot, which can go do work,
9:17
obviously come back and
9:18
escalate to you. For us, one of the flagship features that we really focus on
9:22
is our approval
9:23
and escalation process. How can we make sure that these autopilots can come
9:27
back, raise their hand,
9:29
and ask for help when they need it, just like when a person delegates. And so I
9:33
think for us,
9:33
it's co-pilot versus autopilot, and we really are building for the autopilot. We
9:36
think that's going
9:37
to come faster than we think. And there was a really good report on this
9:41
in November, I
9:41
think a state of AI report, where they looked at past
9:45
examples of co-pilots.
9:47
You know, chess was one example they actually used. People thought human plus
9:50
machine would
9:51
beat the machine for a very, very long time. It was actually only a couple of years
9:54
before the machine
9:54
became better than the human plus machine. So there are certain use cases where those
9:58
autopilots are
9:59
really critical, I think when you're evaluating, just bucketing them up in
10:04
those two categories
10:05
is a really helpful way if you're a little bit confused right now about the
10:08
terminology,
10:09
because it's probably going to take a couple of years before this terminology
10:11
becomes
10:12
better defined and better adhered to by companies out there.
10:17
I think that's fantastic advice. And it's a really good segue for the next
10:20
question that I have
10:20
for you, Daniel. You mentioned when you did your introduction of Relevance, you
10:23
guys have some
10:24
predefined agents within your organization, but then you also kind of
10:27
said, "Sky's the
10:28
limit. We want people to be able to build their own agents for whatever their
10:31
company or their
10:31
use cases are." If someone here is listening to this and they're like, "I'm not
10:35
sure if I want to buy
10:36
one of those personified agents that we're seeing on the market or I feel like
10:39
I have a very unique
10:40
use case that I need to build something for," when is the right time to use one
10:44
or the other? When
10:44
is the right time to get one that's fully built out and baked, where the company's
10:47
done it for you, versus
10:48
when's the right time for me to go in and put in the work to build something
10:52
custom to myself?
10:54
Yeah, so at Relevance, we kind of broadly define those super-personas as
10:58
recruiters and builders.
10:59
So ultimately, we think every single company is going to have these builders in
11:03
the company,
11:04
who are going to be able to manage and deploy the AI workforce for their
11:07
organization. I think
11:07
that as maturity grows, that's going to become the de facto way of doing
11:10
business. It's going to be
11:11
a critical part of the business that people are going to be builders. But
11:15
given where we are
11:16
today, it's a very new concept. It's quite hard to do. We're really at the...
11:21
We look at what we
11:21
can achieve with Relevance agents, and we always think to ourselves, we're very
11:24
much at the cutting
11:24
edge of what's possible. This will get easier every month. They'll become more
11:29
capable.
11:30
And while we're in this phase, I think it's extremely useful for people to have
11:33
a starting
11:33
point. That's a great place for them to kind of see the benefit, see the value,
11:37
and get buy-in
11:38
from the organization. I think the biggest question we get is, does it work? And so
11:43
being able to show
11:43
the organization it works for a specific functional role or use case, a
11:47
specific workflow like
11:48
Kyle's mentioning, a specific process, that's a really powerful thing. That
11:51
gets you a lot of buy-in,
11:52
it lets you achieve a lot of things. And then you can invest to do this in more
11:55
areas of the business.
11:56
So the viewpoint we take on that at Relevance is, we do believe everyone's
11:59
going to be a builder,
12:00
but we want to serve the recruiters today. And the best way to serve those
12:03
recruiters is give
12:04
them a flagship template that can be a starting point for them to build on. Now
12:08
as part of our DNA, we want to customize, we want to configure. Every customer's
12:13
Bosh, our BDR agent, is going to be different. People do research in different
12:16
places. We don't
12:17
want a black box thing that just goes to LinkedIn, pulls some, you know,
12:21
new-title analysis,
12:22
congratulations on your new role, right? If you're looking for those
12:24
solutions, I think that's a
12:25
lot of what personified agents are doing today. That's, to me, not a true
12:28
autopilot, a true autopilot
12:30
is what, you know, was being mentioned before, about having those multiple steps,
12:32
being able to make
12:33
dynamic decisions, but those decisions and actions being made based on your
12:36
process. Are you checking
12:38
X for real research about your business, how are you using your CRM, how are
12:41
you using the other tools?
12:43
And so that's really mimicking what your employees do. They sit down, they've
12:47
got a plethora of tools,
12:48
they've got a bunch of decisions they make as part of a job. Let's define one of
12:51
those jobs to be
12:52
done. Let's start with kind of a template for an agent that's customized
12:56
so that,
12:56
and then let's go from there. And once you have that buy-in, it feels very easy
12:59
to build on to other use
13:00
cases. And we see that expanding very rapidly throughout the organization. So
13:04
our customers who
13:04
start with a single use case very, very quickly start moving into others,
13:08
whether that's going to
13:08
market, going from outbound to inbound to database farming, whether that's
13:12
going to another team
13:13
like lifecycle marketing, that's a big one picking up right now. So I think for
13:17
most people out there,
13:18
think of your starting point, and then build up from there, find something that
13:22
's operationally
13:23
expensive right now, but has a clear, repeatable job to be done. And that's a
13:27
great entryway.
13:28
But ultimately, we obviously want you to be a builder. And I think that's what
13:30
you should be
13:31
thinking about: how are you not locking yourself in today, to make sure that
13:34
in a year or two,
13:35
when you need to have a much broader set of agents, you have that ability
13:40
to expand.
13:42
That's super helpful. And Kyle, I'm curious, you've talked before about
13:45
templates,
13:45
like I know something Copy.ai is really good at is templates. Is that something
13:48
that you,
13:48
like if you've seen that, I feel like you guys have built out a lot of those.
13:52
Is that something
13:52
you agree with? Or do you have like a different take there? Very similar take.
13:56
I would say the
13:57
way that we think about the templatization of these workloads is that there
14:01
are some processes
14:02
from company A to B to C that are more or less the same. So Daniel mentioned
14:07
the account research.
14:08
Let's just keep going down that route. Because I'm sure a lot of listeners here
14:11
are familiar
14:12
with that BDR work or that sales team work. It's more similar than it is different
14:17
from company one to
14:18
company two. And so you can have a really nice templated workflow that gets you 70 or
14:24
80% of the way there.
14:25
But then to Daniel's point, you need some sort of means of customization to
14:29
make sure that you
14:30
can get as close to 100% as possible. So that when you are deploying that agent
14:34
or when you
14:35
are deploying that workflow inside your CRM, it's delivering the full set of
14:38
value that's
14:39
bespoke for your business. So the templatization of these workloads is really
14:43
critical. It just
14:44
depends on how similar the processes are from company A to B. Things get a lot
14:49
more complicated
14:50
on the marketing side. And of course, there are other sales use cases that are
14:53
more complicated.
14:53
But like the marketing side is really hard. The way that company A approaches
14:57
thought leadership
14:58
is totally different than the way company B approaches thought leadership. So, to
15:02
some extent,
15:03
we can provide some sort of templatized workflow. But there's a lot more
15:06
customization that's
15:07
required there. So when Daniel talks about the builders, I totally agree. Like
15:10
these are going to
15:11
be super high leverage people that every company is going to hire. The way that
15:15
we think about it,
15:16
similar to Daniel, is a go-to-market AI architect. You need to have the
15:21
domain expertise
15:22
of what makes for an effective process. And you need to have the AI expertise
15:27
to be able to design
15:28
and deploy the prompts correctly so that you can put all of these processes
15:32
together into a workflow
15:33
that is then deployed across your company. And I think that combination of
15:37
human strategy,
15:38
outsourced execution to AI to get a lot done, is extremely important. But I think it
15:43
almost always requires
15:44
at least right now, a human on the back end to sense-check and
15:48
do the last mile
15:49
fit and finish and polish to make sure that the work product is as
15:53
perfect as it needs
15:54
to be for prospecting or marketing content or whatever it may be. Absolutely.
15:58
Sunil, I have a
15:59
question for you that I've been really excited to chat about, which is I've
16:03
heard a lot of different
16:04
things on the market. And this is funny. I feel like the session is just for my
16:06
own edification
16:07
of the questions that I've had. And I'm hoping that people listening here also
16:09
find value
16:10
in this, which is we hear a lot about agents that can do like multiple parts of
16:14
a role. Like you
16:15
guys have a digital engineer that you've talked about at Tribble that can
16:18
obviously, as you talk
16:19
about, go step by step, doing multiple things, versus I know there's
16:22
also agents on the
16:23
market that like hone in on one specific part of a role or one specific part of
16:28
a job description.
16:29
Can you just give me your take on like when is the right time to have one that
16:32
does that full
16:33
end to end versus like when's the right time to have an agent that focuses on
16:37
one particular task
16:38
within an organization? Are there pros and cons to either that you've seen?
16:42
Yeah, I think I'll share how we're looking at it. And you know, by all means, I
16:47
think the space is
16:48
evolving so fast, and the other folks on the panel will agree, that even, you know, the
16:51
level of capability
16:52
of some of the foundation models that are coming out and what they can do is
16:56
changing at such a
16:56
rapid pace that our best thesis at this point is I look back even in my own
17:00
experience of you know,
17:02
my previous roles building vertical specific software. And there was a need to
17:06
build software
17:07
very focused on specific industries going deep into the business process of
17:10
specific subverticals.
17:11
And that was always in contrast to the horizontal, off-the-shelf, more configurable
17:15
software,
17:16
there's always this build versus buy discussion. I think now what's changed is
17:20
the build part of
17:21
the equation is becoming far more easier for most people and broadly applicable
17:26
with it.
17:27
So this tooling becomes something a lot more people within the enterprise can
17:30
use to build
17:31
applications, i.e., agents, co-pilots, the next generation of software, whereas
17:35
it was restricted
17:36
before. So for us, the trade-off has always been, you know, when we
17:41
home in on a
17:41
specific persona, a specific role in the company, how much more of that task
17:46
can we complete,
17:47
you know, to Kyle's point, multi-step, end to end, with a level of efficacy that it
17:51
can actually
17:52
displace a unit of human work in that role. So if we hone in on a specific
17:56
person within a company
17:58
in a specific role, then can we do one task really, really, really well and do
18:02
it out of the box as
18:03
consistently as possible within a specific subvertical. So it's almost once
18:07
again, it's the
18:07
horizontal applicability versus really going deep within one vertical one
18:11
business process
18:12
and just nailing that consistently and then trying to grow that out. So we've
18:15
definitely
18:15
taken that approach. And you know, when we think about where, you know,
18:19
for which roles,
18:20
it makes sense to take that approach. Well, it's when it's a really high
18:24
leverage task,
18:25
where there's a huge operational cost associated with it, like for us, we
18:27
picked sales engineering,
18:29
one, because I used to be an SE at Salesforce before I started this, and my
18:33
co-founder as well.
18:34
And two, we know that there are never enough of them on any go-to-market
18:38
team and they're a
18:39
high leverage role. So it seemed to make sense that if we free up a very
18:42
specific unit of time
18:43
for them, you know, that has an impact on the org. And it also is a business
18:47
process that
18:48
these companies are used to buying software for; that's the other thing. It's
18:51
really hard
18:52
to have conversations where we're coming in and talking about this new
18:54
transformative platform,
18:56
which is a longer sales cycle, because you're also educating your customer on what
18:59
this is and how
18:59
to roll it out. Whereas what we found success with is really nailing a
19:03
specific area where
19:04
they're buying legacy software, one-to-one displacing it, and being 10x
19:08
better
19:08
on just that one business process. So it really matters what you're selling and
19:13
the capability.
19:14
I gave it from the perspective of what Tribble's doing, but I'm sure when you
19:16
go more general
19:17
purpose, there's different ways to work. Absolutely. Now, Kyle, I've got a
19:21
question for you. I know
19:23
kind of what CopyAI is really focused on is like streamlining parts of the go-
19:26
to-market process.
19:27
But what's interesting about it is, as we're talking about AI agents and AI
19:31
workers, obviously,
19:32
this is very AI focused, but humans have to be involved. Like, I think that's
19:35
something that
19:35
everyone is agreeing on, that humans have to stay involved in this
19:38
process.
19:39
I would love to get your take on where that is in the process. As we bring on
19:42
AI agents or AI
19:43
workers to help streamline things, what do they take on and what do we leave
19:47
for humans to still
19:48
manage? Yeah, that's a great question, Sarah. And I think it's a continuation
19:51
of what Sunil and
19:52
Daniel were just talking about. And let's use Sunil's example. Like Sunil has
19:56
the domain expertise
19:57
from being a sales engineer at Salesforce to know what is required to complete
20:03
an RFP response.
20:04
If you just go to ChatGPT right now and you ask it to try and complete an RFP,
20:08
there's no chance you're going to get anything of value there. No chance. Like, it's
20:12
probably
20:12
going to create more headaches than it's going to solve. And so,
20:17
where we are right
20:17
now in this era of AI, and probably this will be the case for a long time, is
20:21
you need some sort of
20:23
domain expert defining what the workflow is. So what are the steps in
20:28
responding to that RFP?
20:30
Where do you go to find that information? How do you distill that information
20:34
and then complete
20:35
that in a way that makes sense for teams? And the same is true for content
20:38
creation.
20:39
If you are an SEO expert, you need to have a codification of your process in
20:44
order to scale it
20:45
via AI. You can't just go to ChatGPT and say, write me an SEO-optimized blog
20:50
post. It's not going
20:51
to work. It's way more complicated than that. And the landscape of SEO
20:54
optimized content is always
20:55
changing. And so you need somebody who is able to codify what their process is.
21:01
When I write a
21:02
long form piece of SEO content, here's what I do. I Google it. I look at the
21:06
top three search
21:07
results. I read all the H2s. I brainstorm "People Also Ask" questions. I do this,
21:11
this, this, this,
21:11
I create a content brief, I then take that brief and turn it into long form
21:15
content. Here's how
21:16
I layer in my value prop. Like it's many steps. And this is why it takes days
21:21
or weeks to turn around
21:22
a piece of SEO content. Because it's, it's a lot of work. But if the SEO expert
21:28
takes the time to
21:29
codify their process in AI, then you can go reuse that over and over and over
21:33
again. So human strategy
21:35
on the front end. And like I mentioned before, even if the SEO expert goes and
21:40
codifies all
21:41
that process really beautifully, the output still isn't good. Like it's not
21:45
good enough.
21:45
We typically see that our rule of thumb is for every thousand words that AI
21:51
writes,
21:52
you need to spend about 30 minutes editing, as a rule of thumb. So if it's a 3,000-
21:56
word piece of SEO
21:58
content, you're looking at an hour, hour and a half of editing to get it across
22:02
the line. And why is
22:03
that? Because these models are probabilistic. They're not perfect. They can't
22:06
follow instructions
22:07
perfectly, especially for long form content. And so there always has to be
22:11
right now, at least
22:12
a human on the back end to do the last-mile work. The AI is going to get you into
22:15
the red zone.
22:16
You need the human to get you across the goal line. Absolutely. And speaking of
22:20
humans,
22:20
I'm going to go back to a larger group question here that I would
22:23
love to hear
22:24
from each of you on, which is as we're talking more about agents and workers,
22:28
it's only a matter
22:29
of time, if it's not already happening, that people are going to bring these
22:32
onto their team.
22:33
How are we managing those workers? Like you talked about humans there, Kyle. So
22:36
maybe this is
22:37
just a continuation of your question that I can go to Daniel and Sunil, but who
22:40
is managing
22:41
these agents or workers? Is there one person that's managing them? Is it
22:43
someone that we're
22:44
going to hire? Like, is it a new role that's hiring these workers and managing
22:47
them? Or
22:47
how are you guys thinking about that internally or seeing your customers manage
22:50
these agents?
22:51
Yeah, I'll defer probably more so to Daniel on this because I think he's given
22:55
this more thought.
22:56
But my general take is what we've already mentioned before, which is there is
22:59
going to be what we call
23:01
a go-to-market architect, or what Daniel calls a builder. There are going to be
23:05
those people
23:05
that are probably functionally oriented because they have to have that
23:08
combination. I know it's
23:09
something like a broken record, but I believe they have to have the combination
23:12
of the domain
23:13
expertise and the AI expertise. I don't want somebody who is a software
23:18
engineer managing
23:21
my workflows for my go-to-market team. They're not enough of a domain expert.
23:25
Most of them
23:26
don't have enough domain expertise to do that effectively. So I think it needs
23:29
to be a combination
23:30
and I see these AI architects embedded across different functions inside of the
23:35
organization.
23:36
I'm really curious to hear Daniel's take on this. You too.
23:40
Yeah, we spend a lot of time thinking about this actually because it kind of
23:43
defines how we
23:43
build our product. For a lot of what we do, the decisions we make are actually
23:48
very
23:49
closely aligned with what Kyle was just saying there about the software engineer.
23:52
If we think about software typically, you have domain experts with a bunch of
23:56
engineers,
23:57
building a great product to serve a market. When we talk about agents
24:00
that are
24:01
functionally doing a lot of the work that humans do, which is very dynamic and
24:05
very flexible,
24:06
it's very different. It's really difficult to codify that in terms of a specific
24:10
kind of an RPA style
24:12
codification where it's like, if this, then that, which is what software
24:15
is, software is
24:15
deterministic. And therefore, if you are set up to have engineers building this
24:21
, you're kind of
24:22
going to fail because engineers are not going to be able to handle those
24:25
nuances and not going to be
24:26
able to handle all these different little pieces and points for every single
24:30
team, every single
24:31
persona in the organization, which becomes exponentially more difficult when you have
24:35
a larger organization.
24:36
So what we've done, in order to try to combat that, is make sure that the
24:41
agent building experience
24:42
is designed for the subject matter experts. We want to bring that expertise
24:46
down as much as
24:47
possible so that the people who have the skills can distill them down into the
24:52
agents.
24:52
And the people who know how to evaluate it can also evaluate the output in the
24:56
same way that
24:56
a manager evaluates their junior employee's work, in the same way that a
25:00
sales enablement person is
25:02
coaching their sales team on how to do something. We want to distill that
25:05
expertise into the
25:06
agent, and we want to have those same people managing it. That being said, we're
25:09
seeing this
25:10
really interesting trend with our customers right now. The ones who are
25:14
starting to think
25:14
about, what does the next 12 to 24 months look like for us, are starting to think
25:17
of this almost like
25:18
an AI workforce manager role. We've already got one customer actually who's
25:21
just basically promoted
25:23
someone who was initially in RevOps. They started out the project on the go-to
25:26
market side. They've
25:26
now promoted them into more of an AI workforce manager role. And their role is
25:31
actually to work
25:32
with the RevOps equivalents in other teams across their business, those
25:36
personas, to help
25:38
educate them on how they should be deploying this for their teams. So we're
25:41
kind of having these
25:41
leaders within each team, with this AI workforce manager, who is like a go-
25:46
to who can help
25:47
close the gap on some of the knowledge pieces that they're missing for those
25:51
other teams.
25:52
And so we're seeing this really interesting trend where I think a lot of these
25:54
companies are going to
25:55
have this expertise internally that can help disseminate it across the organization
26:00
, but ultimately
26:01
it has to remain within those teams. You don't have engineers hiring your SDRs.
26:05
You don't have
26:06
engineers hiring your new marketing person. Why would they be building the
26:10
agent which does that
26:11
role? You have to have those same teams own that function. And so for us, it feels
26:14
very much like
26:15
it's going to stay in those places. And so you're going to need people in those
26:20
teams who
26:22
are a little bit more comfortable with the more technical side. Sales engineers,
26:26
RevOps people,
26:27
like these types of personas tend to be really good in sales. You have a lot of
26:30
these similar people
26:30
in other functions who can take the flag and push the team forward, but it has
26:36
to be deployed
26:37
throughout those organizations. Because as Kyle said, if you don't have the
26:41
domain expertise,
26:42
what are you doing there? Can your engineer go and sell to someone new? Like if
26:46
they can't do
26:46
that, then why are they building it? I think that's a really important thing to
26:48
keep in mind.
26:49
Absolutely. Sunil, are you seeing the same thing with your sales engineer? Is
26:52
it like a sales team
26:53
that's managing that? Or have you seen any like different management type
26:57
things within
26:58
either your company or your customers? Yeah, I think, you know, in most cases,
27:04
what's been interesting
27:04
to see is a lot of companies have also stood up essentially these, they're
27:08
calling them,
27:09
AI councils, like, how do I procure the software, and starting to rationalize
27:12
across use cases and
27:13
then also determine who are the owners of the software, how does it get
27:16
procured, how do you
27:17
continue to maintain and manage it. And I keep coming back to this build versus
27:20
buy, because I think
27:21
when you have more of a horizontal platform, like what Daniel and Kyle are
27:24
talking about in some
27:26
cases, you might have a broader function that needs to manage multiple
27:30
processes. Or when you're
27:31
purchasing something like Tribble, you know, we're specifically going in and
27:34
saying, hey,
27:35
it's mimicking the role of the sales engineer, the proposal manager. So it's
27:40
augmenting the existing
27:40
team and it becomes part of that team. So you continue to manage it as you
27:44
would a team member
27:45
on that team, because we try to minimize the workload on the team to actually
27:49
build part of the product.
27:50
So we are more out of the box, if you will, as opposed to it being a little bit
27:53
more general
27:53
purpose that you make fit your use cases. So in that case, for us, the
27:57
implementation and rollout
27:58
really is, hey, how do we get the team to understand how to use the product,
28:02
how to put it in their
28:02
hands, drive adoption that way, and roll it out. So it's a little bit, you know
28:06
, once again,
28:07
I keep coming back to horizontal or vertical, right? Because I think there is
28:10
this, when it's
28:10
much more purpose-built, the way you get it into the hands of folks and roll it
28:14
out is different.
28:15
Now, I will say the idea of actually measuring what the thing is doing, however
28:19
, is relevant,
28:20
right? And I think, you know, one of the things Daniel was talking about is
28:23
really interesting,
28:23
because sometimes a multi-step agent is a black box. So it's like, you want to
28:28
ask, you know, if you
28:28
give a task to an employee and you say, hey, get this done, they come back and
28:31
they get it done,
28:32
you're like, wait, what did you do? What is that debug log of exactly how you
28:35
executed the steps
28:36
you needed to do in order to get that task done? Where did you learn how to do
28:39
that? You shouldn't
28:40
have done step number seven. So, you know, how do you make that relevant or
28:44
available to the users
28:45
that are managing this process? For us, it's a really important piece. So we
28:48
launched this
28:49
capability called Tribalytic, which is essentially like opening the hood on why
28:52
did I make the
28:52
decisions I made to get to what I need to. And I think there's a flavor of that
28:55
that needs to
28:56
exist in any tool, especially when it gets into multi-step systems that are
28:59
going off and doing
29:01
things, which can make some people very uneasy. Yeah, absolutely. Well, we are almost out of time
29:06
. This has been
29:07
such an incredible session. I appreciate you guys so much coming on and talking
29:10
about this with us.
29:11
For anyone who's listening, if you want to learn more about Tribble or Copy.ai
29:15
or Relevance AI and
29:16
the agents that they have, we actually have an AI worker job fair here, which
29:19
we mentioned at
29:20
the beginning. Go check them out. They have booths. You can learn a little bit
29:22
more about what they do
29:23
and go visit their websites to learn more. So guys, thank you so much for
29:26
joining us today. This has
29:27
been fantastic.