AI in Knowledge Management
Empowering Customer Support Teams of All Sizes
Speakers
Watch "AI in Knowledge Management: Empowering Customer Support Teams of All Sizes", a fireside chat designed for customer support and success leaders looking to scale smarter with AI-driven workflows.
Discover how AI is transforming Knowledge Management, making it accessible and effective for teams of any size. Learn how AI tools can streamline documentation, close knowledge gaps, and enable support agents to deliver faster, more accurate responses—all while building scalable knowledge bases that grow with your team.
Featured Speakers: Jessica Herbert, Senior Manager of Customer Experience at Canvas Medical Jessica leads the charge in optimizing customer workflows and improving patient care through innovative technologies. With extensive experience in healthcare operations, Jess shares actionable insights into AI-driven customer support strategies.
Danielle Murphy, Head of Support Engineering at Spacelift Danielle specializes in leveraging AI solutions to streamline processes and empower teams. She brings a data-driven perspective on improving operational efficiency and scaling customer support.
Zac Hodgkin, Director of Support at Panther With over a decade of experience in technical support operations, Zac specializes in optimizing workflows, scaling support systems, and enhancing customer satisfaction through innovative tools and processes.
Moderator: Marty Kausas, Co-Founder at Pylon Marty is leading the charge in building the next-generation support platform designed for B2B companies. Before founding Pylon, he held engineering roles at Airbnb, invested through Andreessen Horowitz’s scout fund, and served as a software engineering intern at Yelp and Qualcomm. His extensive experience will help ensure the session is engaging and insightful.
Key Takeaways: Unlock AI tools to create and refine knowledge base articles as tickets are resolved. Identify and close knowledge gaps automatically, enhancing team efficiency. Leverage AI chatbots and workflows to provide faster, smarter customer support. Learn how smaller teams can implement AI-powered KCS workflows to achieve big results
Transcription
Transcription was done by AI. It may not be fully accurate.
Marty Kausas:
about this. And can everyone see my screen? Maybe Zac, you can confirm? Danielle? Cool. So, yeah, the topic today is really fun. A lot of what you hear about, the webinars you see or things on LinkedIn, when you think about AI in support or post-sales, ends up being about AI deflection: trying to answer questions automatically with AI. But there's a whole other part of post-sales, support, and success that includes knowledge management. And this actually ends up being, especially for B2B companies, an even bigger part of what AI can do, but it's much less talked about. So I'm super excited to chat about this topic today with Zac, Danielle, and Jess, and go through their experiences and how they think about it. The quick agenda we're going to go through: first, I have to shill my company, sorry, that'll take a couple minutes at the beginning. We'll go through a quick intro on Pylon and things we're thinking about on the knowledge management side. Then we'll go to the panel discussion, and I'll go through intros for everyone. And then we'll go to a Q&A. If you have questions throughout, you can always drop them into the Q&A section. We can see them live as you're commenting or asking questions, they can be upvoted, and we'll get to them at the end. So yeah, really excited for this, and I'll dive in with my quick advertisement. Sorry, ad blockers won't work today. Quick bit about Pylon: I'm one of the co-founders. We're building the first customer support platform for B2B companies. Our goal is really to bring the entire post-sales team together to work in one place, instead of having support, success, solutions, and professional services all live in separate systems. We're bringing all those teams together and really combining all those products into one.
A lot of people like us because you can offer support over shared Slack channels or Microsoft Teams instead of the traditional channels. AI batteries are included, as you'll learn today, and of course you want a modern tool. So quickly, just setting the stage for how a lot of people are thinking about AI and knowledge management, let's run through how people used to think about this. You find gaps in your help center and your documentation, you write articles, and then you try to keep your knowledge base clean. Before, you might see a question and shoot it into a Slack channel and say, hey, here's a gap, we should go write an article for this. And then the message gets missed, and whoever was supposed to act on it ignores it. Also, when you write articles, you reference a bunch of old tickets and then write from scratch. And in terms of keeping knowledge bases clean, you check for duplicate content and try to maintain consistent formatting. All of this is a lot of work, and rightfully so, because it ends up being really important for your customers. The new ways we're seeing the industry move are: AI can help you discover what gaps exist in your knowledge base and find references to the questions people are asking about those topics. AI can help draft articles for you, so not completely rewriting and publishing them, but human-in-the-loop style, helping you create things from scratch faster, even based on previous responses you've given to other customers. And finally, AI can detect similar content so you're not rewriting the same thing, and even rewrite articles into specific formats for consistency. Okay, so we're done with the ad. I'm really excited now to go to our panelists here.
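To make the gap-discovery idea above concrete, here's a toy sketch of the kind of logic involved: tag resolved tickets with topics, then flag topics that recur but have no matching article. The data and the `find_knowledge_gaps` helper are invented purely for illustration; a real system would derive topics with embeddings or an LLM rather than hand labels.

```python
from collections import Counter

# Toy data: each resolved ticket is tagged with a topic keyword.
tickets = [
    {"id": 101, "topic": "sso-login"},
    {"id": 102, "topic": "sso-login"},
    {"id": 103, "topic": "webhook-retries"},
    {"id": 104, "topic": "sso-login"},
]

# Topics already covered by an existing knowledge base article.
covered_topics = {"webhook-retries"}

def find_knowledge_gaps(tickets, covered, min_count=2):
    """Return topics asked about repeatedly that have no article yet."""
    counts = Counter(t["topic"] for t in tickets)
    return {
        topic: n
        for topic, n in counts.items()
        if topic not in covered and n >= min_count
    }

print(find_knowledge_gaps(tickets, covered_topics))
# {'sso-login': 3}
```

The `min_count` threshold is the judgment call: one-off questions may not deserve an article, but anything asked repeatedly with no coverage is a gap worth writing up.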
And so, yeah, we'll go one by one, and I'm going to ask each of you to introduce yourself and your role, say where you're based, and tell us what your company does. So Jess, we can start with you.
Jess Herbert:
Yeah, nice to meet everybody. Thank you for joining. My name is Jess Herbert. I work for Canvas Medical, and I'm our Senior Manager of Customer Experience. Canvas Medical is an EMR at heart that is accelerating everyday medicine. We have APIs, SDKs, database models, everything to help our users customize how they care for patients.
Marty Kausas:
Thanks, Jess. Danielle?
Danielle Murphy:
Hey everyone, I'm Danielle. I'm based in Dublin, Ireland. I'm the head of support engineering in Spacelift and Spacelift is essentially a platform that will help you automate and optimize your infrastructure as code workflows. So what we basically do is we make managing your cloud infrastructure a lot more efficient, scalable and secure.
Marty Kausas:
And maybe I'll add in one more part of the question. Danielle, how long have you been at Spacelift? And then also how long have you been in the industry overall?
Danielle Murphy:
So I'm coming up on two years at Spacelift next month. For the DevOps industry, I'd say maybe two or three years, but I've been in cloud infrastructure for about five or six years. This role is more specialized, and I've been doing it for about two to three years.
Marty Kausas:
Awesome. And Jess, really quick, rewinding to you, what about yourself?
Jess Herbert:
Yeah, I've been at Canvas going on seven years this year, I believe. I was actually our first customer-facing employee way back in 2018. Before that, I wasn't in customer support directly; I was caring for patients in clinical care. I then moved to at-the-elbow, side-by-side support for EHR users, and evolved from there to here.
Marty Kausas:
Awesome. And then Zac, sorry for rewinding there.
Zac Hodgkin:
Totally cool. My name is Zac, and I do support things at Panther. Panther is a cloud SIEM. If you don't know what a SIEM is, it basically collects a bunch of logs across different platforms, and then you can run detections on them; security teams can use it to do investigations based on things they're seeing in the logging. And for the other questions: I live in Ann Arbor, which I said earlier, and I've been in cybersecurity for about eight years now. Is there anything else I need to answer?
Marty Kausas:
Yeah, that's great. And maybe starting with you on a question about knowledge management: how do you think about knowledge management? What are the core components as you think about it broadly?
Zac Hodgkin:
So in the most general terms, knowledge management is just the process of creating, updating, and maintaining answers to common questions or issues to quickly solve problems, whether for people on internal teams or, and this is the best thing about it, putting that stuff in front of customers so they can self-serve and never actually have to engage with support teams. But yeah, knowledge management is important outside of support teams and customers too; it can also help out your other internal teams.
Marty Kausas:
And can you talk about the team structure at Panther? Is there anyone specifically dedicated to knowledge management, or is it split across the team?
Zac Hodgkin:
So in terms of people that contribute to our knowledge base, it's mostly our support team. We do have some people across other teams who contribute from time to time. How we have it split up is we basically have two tiers of contributors to the knowledge base, and shout out to Sally, since she's the one who implemented this at Panther. We have KB publishers, who've been vetted on the content they're creating, and they can just go and create new content. And then we also have regular contributors, and those people go through an approval process to ensure all the content meets our standards to be published to our customers or internally. But yeah, it's actually entirely our support team.
Marty Kausas:
Got it. Danielle, how does that compare to what you have set up? Are there dedicated resources for knowledge management? Is it handled by you or other people on the support team?
Danielle Murphy:
Yeah, it's pretty similar. For us as well, it's mainly our support team, and I like it that way, because we have a lot of customer interaction, so we're familiar with how they like receiving information. For example, I think we're all guilty of not loving reading documentation. Our knowledge base articles are kind of like, okay, for this error: step one, two, three, where our documentation is more focused on, let me tell you all about this feature. So it's the support team, and we're a small but mighty support team: there's three of us at the moment, and we all contribute to it. We don't have a dedicated person, but I'm quite a fan of that, because then it's not just one person's view or one person's information; it becomes a shared collective. I'm based in Europe, and the two other guys on my team are both based in America, so we'll always have someone on answering tickets as they come in. But then we have it split up where you have your on-queue time and your off-queue time. Right now, for off-queue time, the focus is on getting our knowledge base articles updated. So that's how we have it split, and it's working quite well so far.
Marty Kausas:
And so do you have dedicated times of the day where, you said on-queue, off-queue, it's: hey, I'm the directly responsible individual for taking items off the queue during these hours, and then separately there's an expectation that I'm writing content? Is that kind of how it works?
Danielle Murphy:
Yeah, exactly. So I could spend half my day on-queue: any tickets that come in, I'll jump on and answer. And then in my off-queue time, I'll go and review what I've looked at. I'll be like, okay, this one doesn't have an article, let's get one created. And then same for the next person: when I'm off-queue, they'll be on-queue, so we still have coverage, and when they're off-queue, they'll do the same with their articles, or maybe that's the time they're learning. And we just continue that.
Marty Kausas:
And can you give us a sense: is everyone 50/50 in terms of time split, or are some people writing more content than others? Is there a reason to distribute it differently?
Danielle Murphy:
So right now I'm 50/50, and the two guys on my team are 80/20, kind of just the way our time zones work out, to make sure we always have someone on the queue. So anything urgent, we can flag straight away.
Marty Kausas:
Awesome. That's actually amazing, and I want to pause really quickly here. It's very rare that we find people who are so dedicated. Well, I guess the bigger you get, the more resources you generally have, but especially on a leaner team, it's hard to dedicate real resources to writing content and keeping it up to date. So I just want to give a shout out: that's really hard to do, and it's awesome that you've prioritized it.
Danielle Murphy:
Well, Pylon has definitely helped us a lot there. And actually, with your introduction of Pylon, one thing I find with using Pylon, and Marty did not pay me to say this, is that it really helps highlight the value that support brings to the wider organization. I think sometimes that can get overlooked, but Pylon makes it a lot more accessible and visible for everyone: oh, this was how easy it was to do, so let me do this quickly so I can show the other teams what we can bring to the table.
Marty Kausas:
Awesome. Yeah, I will definitely double click on that. But Jess, I want to loop you in really quick. Give us a sense, what's the lay of the land at Canvas when it comes to how you think about knowledge management and what does the team look like?
Jess Herbert:
Yeah, our team is also very small but very mighty. There's three of us, and we actually encompass everything from implementation, onboarding, support, and anything in between. We have one support agent who puts most of her effort into the knowledge center while also being in the queue, so I aspire to get to a point where we can have on-queue and off-queue time, shout out to Alyssa on that one. But we all contribute. So if I'm answering a ticket and something comes up, I make sure we update the article right then, and also shout out to Pylon that it's easy to do right away. And even after an article is written, the three of us will review it together and make sure we have the same understanding, so that we're supporting our customers in the same way.
Marty Kausas:
Awesome. And can you give us a sense as well: you're in the healthcare space, and I'm sure that's a little different, or it might be, I assume. Can you paint the picture of what type of support you're providing, maybe the general volume, and where people are submitting tickets?
Jess Herbert:
Yeah, absolutely. So we're the EHR, and our customers are care delivery organizations. They are actively caring for patients or supporting the people who are caring for patients, so there's urgency when they message in, because it's likely impacting patient care right at that time. It's essential that we have this documentation and that it's easy to read, quick to reference, and unblocks them for what they need. Being healthcare, like I said, there's urgency, and we try to keep a very quick response time for that reason.
Marty Kausas:
And do you feel like there are any differences you have being in healthcare? Is it just the priority and urgency of the requests? Do you think it's any different from traditional B2B companies?
Jess Herbert:
Yeah, I actually do, because there's the layer of patient safety. So for every ticket that comes in, if it's a user needing functionality, does that impact patient safety? Is it going to have a negative outcome for their patient? Could it cause harm? That's always at the top of our mind. And we always want to make sure we're enabling all of our users to have the resources they need to give excellent care to their patients. So yeah, absolutely more complicated.
Marty Kausas:
Awesome. Okay, Zac, I want to shift back to you. Let's get into AI in knowledge management and kind of where we're going. Can you talk about how you started to think about AI in your knowledge management workflows and where you see that going?
Zac Hodgkin:
Yeah, that's a good question. Honestly, I never had the idea of, oh, how can AI help me. It kind of fell into our lap as we were going through the support process. When people started using ChatGPT, our TSEs started using it to help improve their messaging, or creating agents with ChatGPT to reference our knowledge base and our documentation. And then there's the current step we're in, where the tooling we're using, which is Pylon, has been listening to us and its other customers and has implemented some really cool stuff that has just made our support agents' lives easier. And let me know if you want me to talk a little bit more.
Marty Kausas:
Yeah, well, I guess people probably want specifics here. So what are the details? You mentioned AI starts coming out, ChatGPT starts getting used by everyone. And by the way, was the team concerned at all about security, with people pasting things into ChatGPT? Was that ever an internal discussion?
Zac Hodgkin:
Yeah, I think the two big discussion points with ChatGPT were: one, don't be an idiot and post confidential information into it. And the other part was, there were some times when a TSE was responding to customers with the most obvious copy-and-pasted ChatGPT answer, and I had to be like, you could have said the same thing in one sentence instead of posting four paragraphs. So yeah, there were definitely some...
Jess Herbert:
Oh, sorry, what were you saying? "I hope this finds you well." That's the big giveaway.
Zac Hodgkin:
Yeah, exactly. So, making sure TSEs are proofreading what they're actually sending to customers, and also making sure it sounds like them and not someone else, I think those were the two big things when they started adopting ChatGPT. One of the other cool things: we used to use Grammarly internally, and no one needed it anymore once TSEs started using ChatGPT.
Marty Kausas:
And one thing I also want to highlight that you said: instead of asking, hey, how can AI help me, you took the approach of looking at your current processes and seeing how AI can naturally be layered in. I feel like that's the correct way to approach it, because that way you're just optimizing what you already have, so you know you're going to get gains, right? It's like, hey, I do this process all the time, how do I make it faster? Or how do I not have to do it at all, how do I not have to think about it? I think that's really important, and not just in knowledge management but everywhere, we should be thinking that way. So yeah, that makes complete sense. Danielle, could you talk about how you've been thinking about AI in knowledge management? And maybe we go a little deeper with you: what workflows have you noticed, what were you doing before, and what have you started trying to do now with AI?
Danielle Murphy:
Yeah, so when I started, we weren't really using any AI for building out our processes. So it was a very painful time of going through Slack, reviewing cases, being like, okay, I'll document that, I'll document that.
Marty Kausas:
But AI is definitely- And really quick, were people sending messages into a Slack channel for you? Or when you say you were looking in Slack, what do you mean?
Danielle Murphy:
Yeah, so we support customers through Slack. We'll have shared Slack channels with them, and then we have a support triage channel where everything comes through for us. That was how I built up a lot of my knowledge when I started: going back case after case and finding things out that way, because we didn't have an internal knowledge base for me to go through. And when I wanted to start building out an internal knowledge base, it was just copy-paste, linking conversations, thinking, how will I word this? Does this make sense? AI has definitely helped big time there, especially since Pylon has a generate-article feature. For me, that has been such a game changer, because with one click I already have something to start with. And my manager always has the perfect metaphors to reference. I remember previously I was saying that I really wanted to find the time to create all these articles, or an FAQ section, but I was spending my time answering the exact same questions as they were coming in. So the metaphor he used was that cartoon of the two cavemen pushing a cart with square wheels, where someone says, oh, here's a circular wheel, and they're like, no thanks, we're too busy. But with AI, I feel like I have the AI putting on those circular wheels for me while I'm still pushing the cart. I don't actually need to stop and pause what I'm doing to change.
Marty Kausas:
Danielle, just to go into more detail on that really quick: can you help us visualize, when you say AI article generation, what is that pulling from? Is that all your past conversations, or how does that work?
Danielle Murphy:
So the way we do it now is, in Pylon, we'll have our issue, which is the back-and-forth conversation with the customer. Previously, I would have just taken snippets from it and my own knowledge and built the article out that way. But now, with the generate-article button, it does it for me. It will summarize the conversation and any steps I've mentioned, any screenshots I've put in, or error messages. And then obviously I know myself, okay, we actually didn't touch on this specific error that comes up with this feature, but I know it does come up, so I'll just add it in there. That's been such a game changer, because I find the hardest part is just starting: thinking, okay, how would I say this, which way would this flow best? With this, I just have a starting place. I can dump all my thoughts into it, and it will fix them up for me. So I'm not worrying now about having to put time aside to do it. I can just get started, clean it up, and I have something to work on.
Marty Kausas:
Yeah, 100%. It's actually incredible. We get this type of feedback all the time, and people obviously use this feature a lot. And for the audience, it sounds so simple that you almost gloss over what's actually happening. What Danielle just described is so simple, and if you've ever used ChatGPT, it's a very obvious use case: take an existing ticket, a conversation you've already had with a customer, and click a button to have AI generalize it into an article draft that you can then edit and make your own. So basically, think of it as copying a whole conversation you've already had with a customer, pasting it into ChatGPT, telling it to write a knowledge base article, and going from there. But that is operationalized now for you, right?
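Mechanically, a generate-article button like the one described here roughly amounts to flattening the resolved conversation into a single prompt and asking a model to generalize it. Pylon's actual implementation isn't public; this is a hedged sketch with an invented `draft_article` helper and a stub standing in for a real model call.

```python
def conversation_to_prompt(messages):
    """Flatten a resolved ticket into a single article-drafting prompt."""
    transcript = "\n".join(f"{m['role']}: {m['text']}" for m in messages)
    return (
        "Rewrite the following resolved support conversation as a general "
        "knowledge-base article. Remove customer-specific details and keep "
        "the troubleshooting steps.\n\n" + transcript
    )

def draft_article(messages, llm=None):
    """Return an article draft; `llm` is any callable mapping prompt -> text."""
    if llm is None:
        # Offline stub so the sketch runs as-is; a real integration
        # would call a hosted model API here instead.
        llm = lambda prompt: "DRAFT (review before publishing):\n" + prompt
    return llm(conversation_to_prompt(messages))

# Invented example ticket.
ticket = [
    {"role": "customer", "text": "My policy run fails with a syntax error."},
    {"role": "agent", "text": "Rewrite the policy in the expected language and re-run."},
]
print(draft_article(ticket))
```

The human-in-the-loop part is the point: the output is a draft to be edited, not something published automatically.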
Danielle Murphy:
Yeah, 100%. And when I did an internal demo, I went through the flow of someone reaching out: first the AI agent tries to answer, but it's something we don't have a knowledge base article on, so it gets routed to a support engineer, and the support engineer gives the answer. What I said in our demo was that normally, that might have been where it ended. The next person who runs into the same error has to look through Slack, look through any internal Notion docs we may have. But now with AI, we can just click to generate an article, and boom, the article's done. Next time a customer reaches out, the AI agent has that information to give them an answer straight away. If they're running into the same error, they ask the question and have their solution straight away. They don't have to wait maybe 20 minutes or an hour for someone to be available; they can just fix it and move on, whether they're evaluating or just using the product. It makes things a lot more scalable, especially for a team our size. It's just such a game changer to have something do all of that for you.
Marty Kausas:
That's awesome. In that demo that you showed to the team, the AI, I assume, was able to answer the question automatically the next time that same question was asked.
Danielle Murphy:
Yeah, exactly.
Marty Kausas:
That's awesome. And I'm curious, maybe a question for all three of you at once: do any of you have specific guidelines you're trying to establish for how articles should look or feel, tone, anything like that? Jess, maybe starting with you.
Jess Herbert:
Yeah. So this is actually our second version of our knowledge center. Our first version had no AI; it was all human input, and it was a very specific template. If anybody has seen the YouTube video of the exact instructions challenge, where the kids try to tell their dad how to make a peanut butter and jelly sandwich, and they don't include all the steps, so he does them literally: that was the basis we used. At the time, I thought, if a robot could do it, then that's how it should be written, and this was before AI was there. That's coming back around and is important because of our AI agents. Now we have a little bit looser template. It's a little more narrative, not that exact challenge, so the peanut butter and jelly sandwich would not come out right, but it gives our users exactly what they need to know in a confined way. We also enabled the Pylon agent, which we found was answering pretty well most of the time, but we can definitely make our documentation and knowledge better to make that agent even better. One thing, Danielle, you were talking about the article generation and how it takes that single ticket. One of the things I'm super excited about is that we also have an issue AI that will look back at our old responses across other tickets. So when we use that for a response and then create an article off of it with generate article, it's even more comprehensive. I'm super excited for that.
Marty Kausas:
So basically, you're using AI to answer a customer question based off of previous customer responses you've given. And then at the end of that, you're generating an article based off of what the AI drafted for you.
Jess Herbert:
Yeah, basically. It's comprehensive all the way around. And it's super important: especially in healthcare, we have protected health information, so we have to be very specific about what AI has access to, what it doesn't, and how content is generated. So again, these articles are just a starting place, and we still go through them with our same process to make sure they're accurate, there's no harm to patients, and our users can continue on with their workflow on a day when they're seeing 20, 30, maybe more patients. So yeah, it's great to have that starting place, a lifting-off point.
Marty Kausas:
Yeah, and one thing I want to bring some attention to: you mentioned the issue copilot, so basically drafting responses to customers. We find that for B2B, that's just way more practical in a lot of cases, because you care about the accounts a lot more, you're higher touch, and the accuracy is way more important. If you have a million tickets per day, okay, some people are going to get bad responses, that's okay, and frankly, consumer questions are just much simpler. Our observation has been that if you're a consumer company, you probably have 80% of your questions that are pretty easily automatable with the AI chatbot stuff that everyone's pushing, and that brings a ton of value because it's easy to quickly deflect those questions. But in B2B, it's the inverse: you have maybe 20% of your questions that are simple and could be straight deflected based on content, and the rest are way more complicated. You have to go into a bunch of internal tools, pull data about the account, and look at the history with them and what you've discussed before. So I guess, Zac, coming back to you: how do you think about automated responses versus copilot-style agent-assist functionality?
Zac Hodgkin:
Yeah. I mean, I'm pretty much in the same boat. From what I've seen with an AI agent versus AI assist in messaging, a lot of our customers' questions are very custom, very specific. It's not super easy to just hit them with a KB article and nail it; the KB article is there to assist. But on the thing Jess called out, the assist bringing up a message that was used in a ticket and then a KB being created from it: literally yesterday, one of our TSEs shared that a pretty complicated question came in, and they used the auto-assist when generating their message. It actually suggested a very complicated but very correct workflow from over a year ago, from when that same TSE had suggested it. So they were able to solve it, done and dusted, and then also create a KB. So hearing her describe that workflow is just so funny, because it's super true.
Marty Kausas:
And it's so cool. One thing we believe philosophically internally is that, especially in B2B, you should be really sensitive about not cross-pollinating data between customers. So, for example, if you do turn on an AI chatbot in B2B, you really want it pulling only from public data sources or things you've reviewed human-in-the-loop, things that are out there and anyone could see. Versus when you use the issue copilot to draft a response for you, you can use more data from those previous responses, because you'll have eyes look over it and verify that nothing sensitive is going out. So that is pretty incredible. That's honestly amazing that that happened. I'm sure that's happened for us internally, but I haven't heard it yet. Maybe we're just getting so used to these cool things that it's starting to get normalized really fast, but we do have these magical moments where it's like, oh wow, that just works now. That's cool. So let's talk about things you've tried that haven't worked, and I'll open this up to everyone. Are there any workflows or AI things that you were bullish on but that weren't successful, and any reasons why you think that is?
Danielle Murphy:
Yeah, I'm happy to start with this one. Previously we found that when we were trying to use an AI agent, it was exaggerating a lot. As you said, with B2B, for a lot of the issues we come across, the answer isn't just copy and paste; there are a lot of different moving parts. It was exactly that for us: policies are a feature we have, and they're written in a specific language, but the AI was giving the answer in a different language, which would make the policy useless. So I was really afraid then of letting an AI agent be customer-facing. But the new workflow, the one we use now with Pylon, only uses information from our knowledge base. So we know what it's saying is accurate, relevant, and updated. And if it's not confident, instead of trying to answer and ruining the customer's trust, it will just let them know someone's going to answer soon. It's a lot more effective. And because we have the AI agent, if someone's stuck on something we have documented, instead of waiting for support from a human, they can get their issue resolved straight away if there's a knowledge base article.
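A rough sketch of the confidence-gated behavior Danielle describes: answer only from the knowledge base, and hand off to a human otherwise. The `retrieve` and `confidence` callables are placeholders for whatever retrieval and scoring a real system would use:

```python
def agent_reply(question, retrieve, confidence, threshold=0.8):
    """Reply from the knowledge base only when confident; otherwise
    hand off to a human instead of guessing and eroding trust."""
    article = retrieve(question)  # best-matching KB article, or None
    if article and confidence(question, article) >= threshold:
        return {"handled": True,
                "reply": f"From '{article['title']}': {article['body']}"}
    return {"handled": False,
            "reply": "I'm not confident I can answer this one. "
                     "A teammate will get back to you soon."}
```

The honest fallback message is the key design choice: an unhandled question costs a little waiting time, while a confidently wrong answer costs trust.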
Marty Kausas:
And going a little deeper into that: when you've played around with agents, have you gated them to only certain segments of customers or certain types of issues, or have you tried just letting them run on all first passes?
Danielle Murphy:
Yeah, so the way we tried it previously was on all first passes, but it only went to the support engineer. So it was more like a copilot saying, this is what I would answer. And we were like, well, it's great that you didn't answer with that, because it's not accurate.
Jess Herbert:
We did something similar and actually turned it on just for our Slack customers. As soon as the agent responds, it says: hey, I'm your AI agent, I'm going to be here, don't worry, I'll escalate to a human. And it was really well received, even when the answers weren't totally perfect. Our customers are very generous with their time and patience in relation to the AI agent.
Marty Kausas:
And Zac, I want to loop you in on something. I think a lot of people in other industries, when they think about agents, are making them try to impersonate humans, which I actually think does not work in support. People get really mad if you say, hey, this is Anna from support, because you're tricking them, right? You're making it seem like there's someone there who isn't. So Zac, can you talk about the profile that you created for Panther?
Zac Hodgkin:
Yeah. We named it Peter Panther, and then we did an AI generation for the image. And in its canned messaging, we wanted to call out the obvious, because our TSEs have really good relationships with their customers and we don't want an agent impacting the relationships our support team has built. So it says: hey, I'm an agent, I'm not going to just keep messaging you. I'm going to take a crack at this, and if it's not okay, I'll get you over to a smarter person who can answer this for you. A very obvious AI that is trying to help.
Marty Kausas:
Yeah, and we've seen that across the board. For our own product, in the create-agent flow, we initially bought a pack of human faces that you could stick onto the AIs, and we found no one was using them. People thought it was super creepy, and it was just not cool. Especially if you're pretending to be a person and then respond incorrectly, that makes your company look so bad, right? So it's better to just get ahead of it, maybe even say "AI beta" so it's clear this is a beta you're trying out, not the real thing. So I highly recommend that when people start implementing conversational agents or AI chatbots, they don't try to fake a person. Make it clear that this is just a first-pass attempt to help speed up the process.
Danielle Murphy:
We have a similar one that will say: hey, I'm in training, I may not always hit the mark, but I'm here to help. It reminds me of Duolingo. I think one of their notifications, after constantly trying to remind you to do your lessons, is: oh, these notifications don't seem to be working, I'll stop. And I think they said that with that notification, people felt bad and went, oh no, no, it is working, and returned to their lessons. It's quite similar when the AI bot says, I might not always get it right, but I'll try. You nearly feel some sort of sympathy, like, no, it's okay, you're doing your best, this is fine.
Jess Herbert:
We saw the same thing. We actually had some of our customers saying "good job" to our bot, which was fun to see.
Danielle Murphy:
I know. I'm like a proud parent whenever I see it has answered correctly. I'm like, oh, this is weird, but I'm so proud.
Marty Kausas:
One thing to point out as well: pre-AI, knowledge management was the source of truth for self-serving your own support, right? If you Google something or search the help center, you can answer your own question. It's even more important now, because that knowledge is powering everyone's drafting capabilities if they want the AI to take a first pass, and it's powering the conversational agents. When people come to us and say, hey, I want to implement AI, our first question is: what does your knowledge base look like? If they don't have a knowledge base, well, what is the AI going to learn from? Where is the data going to come from? So everything actually has to start with that knowledge base being really clean. Anything you can do to make that process more efficient, identifying what to write and actually writing the articles with AI article generation, as we've discussed, ends up being super important. Moving upstream in that flow of identifying what to write: has anyone tried anything to help figure out what content is missing, or how to even think about what to prioritize?
Jess Herbert:
I'll start on that one. There's actually a great question in the chat right now about exactly that. I think all of us who use Pylon are super lucky to have the AI knowledge gaps option. It looks back at our tickets, and it can flag when a single ticket needed a new article, and it can also find trends across issues that need one. Then you can generate an article, and it will show you the tickets associated with it and use all of that data to generate the article. Before that, it was: oh, we realize we don't have the article when somebody asks the question, and it goes into a backlog and waits until you can update it on your own. So those two things, the gaps and the articles needed for a specific ticket, really accelerate exactly what we need to do.
Marty Kausas:
Yeah. Anyone else? Danielle, have you thought about gaps, implemented them, or are you planning to?
Danielle Murphy:
Yeah, so one example was actually today, and it was more for future-proofing. We changed something in a module, and any of our customers currently using it are going to need to update. We had this whole conversation about why they need to update and what steps they need to take. From that, I could generate an article specific to people upgrading from the previous version to the new one, rather than setting up fresh. I could also use the AI to find which customers have talked about this and may be using it. And then again, with Pylon, I could send a broadcast to exactly those customers: hey, we see you're using this, here's an article for how to upgrade it. It's so nice when all the different features come together. For support, being proactive like that is huge: hey, this is going to break in the future because of some changes, but here's what you can do now to get ahead of it, and here's a knowledge base article specific to you. They're not having to read around it and guess which parts apply; it's straight away there for them, and they don't run into an issue further down the line where they get frustrated.
Marty Kausas:
Got it. And Zac, what about you? What was your process before, or I guess currently, and where are you going? How do you identify what to write content for, and how are you thinking about that moving forward?
Zac Hodgkin:
Yeah. For every support interaction, our TSEs try to follow a process: understand the customer issue, use their resources (search the KB, search Docs, search Slack, ask someone else, ask engineering), and when they find the answer, if it came from outside those resources, create a new resource and use it to share with the customer. So we try to shorten the gap by incorporating that process into the actual ticketing process. But, as we've talked about already, there are things that we miss, things that aren't there. And Pylon has a really good way of identifying potential knowledge base articles, those missing gaps. Another thing that falls into this, which actually isn't a knowledge gap thing, it's the opposite problem, is one of the cool things that Pylon can do.
Marty Kausas:
I know we're all just saying "Pylon does these things," and I'm sorry, by the way. It's just that when we talk about what's next, it's platform stuff, right? It's the tools.
Zac Hodgkin:
Yeah, it is. But Pylon does do these things, and they work really well, so we're going to talk about them. One of the things you can do when you're generating an article is see whether there are potential duplicate articles. It helps you maintain the health of your knowledge base, so you don't end up with a bunch of duplicates, which in the past was something miserable to sort through, right? Knowledge base cleanup and maintenance is a huge piece of this, and it's kind of a nightmare unless you stay on top of it.
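The duplicate check Zac mentions can be approximated by a simple text-similarity pass over existing titles before a new article is created. This bag-of-words cosine sketch is illustrative only; a production system would more likely use embeddings:

```python
import math
from collections import Counter

def _vec(text):
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def likely_duplicates(draft_title, existing_titles, threshold=0.6):
    """Flag existing articles whose titles look close to a new draft."""
    dv = _vec(draft_title)
    return [t for t in existing_titles if _cosine(dv, _vec(t)) >= threshold]
```

For example, `likely_duplicates("Reset your password", ["How to reset your password", "Billing FAQ"])` would flag the first title and ignore the second.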
Marty Kausas:
Yeah. I'll also say, I'm sure everyone's had this experience before, both on the panel and listening. We hear the Frankendesk story all the time: you come into a new team at a new company, you have no context on everything that's been written, and it's literally a crazy hodgepodge of potentially hundreds of articles. That's a very obvious place where this could help, right? You're about to write an article, and something just like it already exists. It's one of those things you weren't even doing manually before, or maybe you would try, but it's so hard and frustrating to do manually that it wasn't useful. And Jess, I know you had something to say too.
Jess Herbert:
Yeah, the other side of that, too, is, like I said, we're working on v2 of our Knowledge Center. So we have all of our old documents as we're rewriting them, and the duplicate article notification has come in really handy a couple of times, just to stop our customers from having confusion about, oh, do I do it this way or that way? So yeah, it's fantastic.
Marty Kausas:
Awesome. Okay, so we're about at the time where we should move to Q&A, and we have a bunch of questions. Actually, sorry, there's one more really important question first. If all of you could share quickly: how has the team received the AI stuff? Some people are concerned, oh, this is going to make me fear for my job, or maybe it's going to make my job more robotic because I'm just pressing a bunch of buttons. Give an honest reception of how it's been for the team internally.
Danielle Murphy:
For us, it has gone down really well, especially because the AI agent and the knowledge base articles can help a lot with the repetitive stuff, but maybe not as much with the issues that take a lot of investigation and troubleshooting. For us as engineers, those are the more fun issues. A lot of times you get stuck with the repetitive work, explaining something or sending document links, so you don't have as much time to deep-dive on the issues you're excited to investigate. So that's one way we're looking at it: it takes away the grunt work, and then we're available to deep-dive and troubleshoot the issues we're actually excited about, where you've no idea what the cause could be and there are a lot of moving parts. So my support team is very excited about that. For other teams, such as the product team, I'm very lucky in Spacelift that our product side knows there's a lot of value in our customer interactions. Previously, they'd ask me, oh, what customers should we reach out to about this feature we want to release? And I'd have to take some time to think, okay, who have I had conversations with around this, and search Slack. Now, with the Ask AI feature we have on our issues, it can just search and do that for me, which makes it a lot more accessible for me and for them. So we can grow a much more customer-focused product, based on real information and real conversations we've had, in a matter of minutes.
Jess Herbert:
Yeah, I totally agree with that. Our team has embraced all of the tools we've been given. We were probably doing it ad hoc before with ChatGPT or whatever other tool we had, so having it all in one place is great. And it goes beyond knowledge management. Like you said, Danielle, it's searching across all the tickets, not having to remember every single detail, and finding things easily. That really gives us a little bit of edge, and some more time to do the fun things.
Marty Kausas:
Zac, anything quick?
Zac Hodgkin:
No, same boat. The TSEs love it.
Marty Kausas:
Awesome. Cool. Okay, so let's hop into the Q&A section, and we'll just start from the top. By the way, if you don't see where the Q&A is: on the right there's a sidebar, and chat is the default, but the Q&A is the third tab. So, first one, from Ravi: after an article gets generated from a customer conversation, do you have an approval mechanism before publishing? And if you're a big company, how do you manage all these generated articles? Zac, you're probably a good person for this one, since you mentioned approval workflows already. Do you want to take it?
Zac Hodgkin:
Yeah. For us it's basically what I mentioned earlier. Specific people who have vetted content that has been solid for an extended amount of time can just publish things without having to get someone's review. But for our TSEs that are more junior, we have an approval process where they ask for someone's review, the reviewer reviews it, and they do a little back and forth if more information is needed. Once it's up to snuff, we publish it. That's how we do it here at Panther.
Marty Kausas:
And just tactically, can you describe how someone asks for a review?
Zac Hodgkin:
Yeah, that's a good question. Currently, how we're doing it in Pylon is with comments. We'll tag someone else and say, hey, this is ready for review. Then the reviewer will say, hey, it looks like such-and-such is missing, can you check on this? The other person says, all right, I did it. And then we're like, all right, good to go, publish it. But what we'd love, as a feature request, is an actual approval state and a queue within the system, because that's what we've done in the past with other knowledge base platforms. So we're just waiting for Pylon to give us that type of workflow.
Marty Kausas:
Totally. As you were describing that, I was in pain listening to the comment workflow. For anyone listening: basically, imagine Google Docs style. You highlight something and tag someone, and that DMs them in Slack with whatever comment was left. So yeah.
Jess Herbert:
I mean, it's that same workflow, and it's a little painful, but it's not horrible.
Marty Kausas:
This is how you know this is the truth; the truth is coming out here. So it's not all AI magic. Okay, cool. Let's go to Jackie's question: how do you balance the use of AI with the loss of hands-on skills, for example, using AI to create content versus writing the content yourself? Danielle, I feel like you might have an opinion here.
Danielle Murphy:
Yeah, definitely. So, I hate writing content. Part of it is that a lot of our customers are based in America, and I've realized over the past few years that there's a big difference between what I speak, which is kind of Irish English, and American English. Previously I'd write an article, or even a Slack message, and think it's fine, then send it to someone and they'd be like, what does that mean? And I'd realize, oh, okay, that must be Irish English. With AI, it kind of does it for me in American English, and I think it actually improves my skills rather than losing them, because I'm still reviewing the articles and still making changes. But I'll read something and think, okay, yeah, saying it that way, or this style of flow, actually works a lot better than what I would have had. So yeah, I definitely think it's improving my skills each time rather than losing them.
Marty Kausas:
And maybe the heart of the question is also around losing hands-on skills. Does it feel like you need to know less about the topic the article is being generated about, or do you still need to maintain the same level of knowledge around it?
Danielle Murphy:
I'd say definitely, for now, we're still maintaining the same level of knowledge, because anything that's in the article, we should make sure it's the truth before we publish it. But it's also helpful if it adds something extra and we're like, oh, where did that come from? Then we can see it was referenced in this issue, go look at that issue, and realize, okay, I didn't know that feature also had this little extra part. So again, it's improving things in that way. You're not having to go from scratch thinking, okay, this is all I know and I hope that's it. With the AI, it has all the information available, and you can go through it and say, okay, maybe that's not 100% applicable, we went a bit off on a tangent in that conversation, but this part is, and I didn't actually know that. So you're constantly learning from it as well.
Marty Kausas:
Yeah, and one thing to point out is that if you're doing AI article generation, it's coming from responses that you've already given to customers. So you, or at least someone on your team, already had to have the background and information to know what to say. The article generation is really just taking work you've already done and putting it in a different, more generalized format. So maybe that's another way to think about it. Another question from Jackie: what ethical considerations should we keep in mind when using AI in the knowledge management space? I'll open that up for whoever wants it.
Jess Herbert:
I think for us, ethics, yes, but like I said before, we have protected health information in our tickets, and we have to make sure that it's filtered out and not included in anything that could be shared across customers or even publicly. That's one of the biggest considerations we take, whether you'd call it ethical or not. I'm not sure if that aligns.
Marty Kausas:
I think that counts. Danielle, okay, a question directly for you: what are the sources of knowledge base gaps, and how do you figure out what topics need an article? I think you may have already covered this, but feel free to resummarize.
Danielle Murphy:
Yeah, definitely. Right now, the way our flow works is that as we're closing out tickets, we'll mark them as needing a new article, needing an existing article to be updated, or already having an article. That way we're not missing anything. Obviously, the downside is that it's only as they come in, though now that Pylon has knowledge gaps that will tell us what's happening, we don't need to worry about that as much. But another way we did it was with AI again: it could automatically tag issues, so if we had a lot of issues around, say, the user management feature, we could have a dashboard showing which features were the highest case drivers. For user management, for example, we could click in and say, okay, we definitely need some articles around this, this issue comes up a lot, let's focus on that. So at the start, it was about looking at the case drivers and what articles we could get out to reduce the amount of tickets we were getting. Now it's closing tickets as they come and making sure there's an article there. So we're constantly scaling up as it comes, and it makes our life a lot easier.
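The case-driver view Danielle describes, tagging each ticket with the feature it concerns and surfacing the features generating the most tickets, reduces to a simple tally. The ticket shape here is a hypothetical illustration:

```python
from collections import Counter

def top_case_drivers(tickets, n=3):
    """Count feature tags across tickets to see where new knowledge
    base articles would reduce ticket volume the most."""
    counts = Counter(tag for ticket in tickets for tag in ticket["tags"])
    return counts.most_common(n)
```

Feeding in a batch of tagged tickets returns the highest-volume features first, which is exactly the prioritized list of article candidates.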
Marty Kausas:
Awesome. Okay, I know we're right at the end here, so I want to be conscious of time. I really want to thank our panelists: Jess, Danielle, Zac, thank you so much for being on the panel. This has been great. Thank you also for giving Pylon some shoutouts; I really appreciate that. For everyone else, we'll send out a recording of this afterwards. I know some of what we discussed is kind of unclear unless you can see it visually; when we say AI article generation, does AI actually write the article, and how does that work in practice? So we'll also send a recording explaining how you can run some of those workflows. Thank you, everyone, so much for coming. Thank you to the panelists; I really appreciate your time and your sharing your wisdom. You are trailblazers here, obviously, and clearly a lot of people want to hear from you. And shout out to Richard and the Support Driven team for making this happen. Thank you very much, everyone.
Danielle Murphy:
Thanks, everyone. Thank you.
Full Transcription
Transcription was done by AI. It may not be fully accurate.
Marty Kausas:
Can everyone see my screen? Maybe Zac, you can confirm? Danielle? Cool. So, yeah, the topic today is really fun. A lot of what you hear about, the webinars you see or the things on LinkedIn about AI in support or post-sales, end up being about AI deflection: trying to answer questions automatically with AI. But there's a whole other part of post-sales, support, and success that includes knowledge management. And this actually ends up being, especially for B2B companies, an even bigger part of what AI can do, but it's much less talked about. So I'm super excited to chat about this topic today with Zac, Danielle, and Jess, and go through their experiences and how they think about it. I'm super honored to do that. Here's the quick agenda we're going to go through. First, I have to shill my company; I'm sorry, that'll take a couple of minutes at the beginning. We'll go through a quick intro on Pylon and the things we're thinking about on the knowledge management side. Then we'll go to the panel discussion, where I'll go through intros for everyone. And then we'll go to a Q&A. If you have questions throughout, you can always drop them into the Q&A section. We can see them live as you comment or ask questions, they can be upvoted, and we'll get to them at the end. So yeah, really excited for this, and I'll dive in with my quick advertisement. I'm sorry, ad blockers won't work today. Quick bit about Pylon: I'm one of the co-founders. We're building the first customer support platform for B2B companies. Our goal is really to bring together the entire post-sales team to work in one place, instead of having support, success, solutions, and professional services all live in separate systems. We're bringing all those teams together and really combining all those products into one.
A lot of people like us because you can offer support over shared Slack channels or Microsoft Teams instead of the traditional channels. AI batteries are included, as you'll learn today, and of course, you want a modern tool. So, quickly setting the stage for how a lot of people are thinking about AI and knowledge management, let's go through how people used to think about this. You find gaps in your help center and your documentation, you write articles, and then you try to keep your knowledge base clean. Before, you might have a question and shoot it into a Slack channel saying, hey, here's a gap, we should go write an article for this, and then often it just gets ignored. When you write articles, you'd reference a bunch of old tickets and then write from scratch. And in terms of keeping knowledge bases clean, you check for duplicate content and try to maintain consistent formatting. All of this is a lot of work, and rightfully so, because it ends up being really important for your customers. The new ways we're seeing the industry move are: AI can help you discover what gaps exist in your knowledge base and help you find references to the questions people are asking about those topics. AI can help draft articles for you, not completely writing and publishing them, but human-in-the-loop style, helping you create things from scratch faster, even based on previous responses you've given to other customers. And finally, AI can potentially detect similar content so you're not rewriting the same thing, and even write articles into specific formats for consistency. Okay, we're done with the ad. I'm really excited now to go to our panelists.
So, yeah, we'll go through one by one, and I'm going to ask each of you to introduce yourself and your role, say where you're based, and tell us what your company does. Jess, we can start with you.
Jess Herbert:
Yeah, nice to meet everybody, and thank you for joining. My name is Jess Herbert. I work for Canvas Medical, and I'm our Senior Manager of Customer Experience. Canvas Medical is an EMR at heart that is accelerating everyday medicine. We have APIs, SDKs, database models, everything to help our users customize care for patients.
Marty Kausas:
Thanks. Danielle?
Danielle Murphy:
Hey everyone, I'm Danielle. I'm based in Dublin, Ireland, and I'm the Head of Support Engineering at Spacelift. Spacelift is essentially a platform that helps you automate and optimize your infrastructure-as-code workflows. What we basically do is make managing your cloud infrastructure a lot more efficient, scalable, and secure.
Marty Kausas:
And maybe I'll add one more part to the question. Danielle, how long have you been at Spacelift? And how long have you been in the industry overall?
Danielle Murphy:
I've been at Spacelift coming up on two years next month. I've been in cloud infrastructure for about five or six years, and in the more specialized DevOps side of it for about two to three.
Marty Kausas:
Awesome. And just really quick, rewinding to you, Jess: what about yourself?
Jess Herbert:
Yeah, I've been at Canvas for, I believe it's going on seven years this year. I was actually our first customer-facing employee, way back in 2018. Before that, I wasn't in customer support directly; I was caring for patients in clinical care, then moved to at-the-elbow, side-by-side support for EHR users, and kind of evolved from there.
Marty Kausas:
Awesome. And then Zac, sorry for rewinding there.
Zac Hodgkin:
Totally cool. My name is Zac, and I do support things at Panther. Panther is a cloud SIEM. If you don't know what a SIEM is, it basically collects a bunch of logs across different platforms, and then you can run detections; security teams can use it to do investigations based on things they're seeing in the logging. As for the other questions: I live in Ann Arbor, which I said earlier, and I've been in cybersecurity for about eight years now. Is there anything else I need to answer?
Marty Kausas:
Yeah, that's great. And maybe starting with you on a question about knowledge management: how do you think about knowledge management? What are the core components, as you think about it broadly?
Zac Hodgkin:
I think in the most general terms, knowledge management is the process of creating, updating, and maintaining answers to common questions or issues, so you can quickly solve problems, whether for people on internal teams or, the best part, putting that stuff in front of customers so they can self-serve and never actually have to engage with support. But yeah, knowledge management is important outside of support teams and customers too; it can also help out your other internal teams.
Marty Kausas:
And can you talk about the team structure at Panther? Is anyone specifically dedicated to knowledge management, or is it split across the team?
Zac Hodgkin:
In terms of people who contribute to our knowledge base, it's mostly our support team. We do have some people across other teams who contribute from time to time. As for how we have it split up, we basically have two tiers of contributors to the knowledge base, and shout out to Sally, since she's the one who implemented this at Panther. We have KB publishers, who've been vetted on the content they're creating and can just go and create new content. And then we have regular contributors, who go through an approval process to ensure that all the content meets our standards before being published to our customers or internally. But yeah, it's mostly, actually entirely, our support team.
Marty Kausas:
Got it. Danielle, how does that compare to what you guys have set up? Is there, are there dedicated resources for knowledge management? Is it handled by you or other people on the support team?
Danielle Murphy:
Yeah, it's pretty similar. For us as well, it's mainly our support team, and I like it that way, because we have a lot of interaction with customers, so we're familiar with how they like receiving information. For example, I think we're all guilty of not loving reading documentation. Our knowledge base articles are kind of like, okay, for this error, step one, two, three, whereas our documentation is more focused on, oh, let me tell you all about this feature. So it's the support team, and we're a small but mighty support team. There are three of us at the moment and we all contribute to it. We don't have a dedicated person, but again, I'm quite a fan of that, because it's not just one person's view or one person's information. It becomes a shared collective. I'm based in Europe and the two other guys on my team are both based in America, so we'll always have someone on answering tickets as they come in. But then we have it split up where you have your on-queue time and your off-queue time. Right now, for off-queue time, the focus is on getting our knowledge base articles updated. So that's how we have it split, and it's working quite well so far.
Marty Kausas:
And so do you have dedicated times of the day? You said on-queue and off-queue, so it's like, hey, I'm the directly responsible individual for taking items off the queue during these hours, and then separately there's an expectation that I'm writing content. Is that kind of how it works?
Danielle Murphy:
Yeah, exactly. So I could spend half my day on queue: any tickets that come in, I'll jump on and answer. And then when I have my off-queue time, that's when I'll go and review what I've looked at. I'll be like, okay, this one doesn't have an article, let's get one created. And then same for the next person: when I'm off queue, they'll be on queue, so we still have coverage. And when they're off queue, they'll do the same with their articles, or maybe that's the time they're learning. And we just continue that.
Marty Kausas:
And can you give us a sense: is everyone 50/50 in terms of time split, or are some people writing more content than others? Is there a reason to distribute it differently?
Danielle Murphy:
So right now I'm 50/50, and the two guys on my team are 80/20, kind of just the way our time zones work out, to make sure we always have someone on the queue. So anything urgent, we can flag straight away.
Marty Kausas:
Awesome. It's actually amazing, and I want to pause really quickly here. It's very rare that we find people who are so dedicated. Well, I guess the bigger you get, the more resources you generally have, but especially if you're a leaner team, it's hard to dedicate real resources toward writing content and keeping it up to date. So I just want to give a shout out. That's really hard to do, and it's awesome that you guys have prioritized it.
Danielle Murphy:
Well, Pylon has definitely helped us a lot there. One thing that I find with using Pylon, and Marty did not pay me to say this, is that it really helps highlight the value that support brings to the wider organization. I think sometimes that can get overlooked, but Pylon makes it a lot more accessible and a lot more visible for everyone to see: oh, this is how easy it was to do. So let me do this quickly so I can show the other teams what we can bring to the table.
Marty Kausas:
Awesome. Yeah, I will definitely double click on that. But Jess, I want to loop you in really quick. Give us a sense, what's the lay of the land at Canvas when it comes to how you think about knowledge management and what does the team look like?
Jess Herbert:
Yeah, so our team is also very small but very mighty. There are three of us, and we actually encompass everything from implementation, onboarding, support, and anything in between. We have one support agent who puts most of her effort into the knowledge center while also being in the queue, so I aspire to get to a point where we can have on-queue and off-queue time. Shout out to Alyssa on that one. But we all contribute. So if I'm answering a ticket and something comes up, we make sure we update the article right then, and also shout out to Pylon that it's easy to do right away. And even after an article is written, the three of us will review it together and make sure we have the same understanding, so that we're supporting our customers in the same way.
Marty Kausas:
Awesome. And can you give us a sense as well: you're in the healthcare space, and I assume that's a little different. Can you paint the picture of what type of support requests you're getting, maybe general volume, and where people are submitting tickets?
Jess Herbert:
Yeah, absolutely. We're the EHR; our customers are care delivery organizations, so they are actively caring for patients or supporting the people who are caring for patients. So there's urgency when they message in, because it's likely impacting patient care right at that time. It's essential that our documentation is easy to read, quick to reference, and unblocks them for what they need. Being healthcare, like I said, there's urgency, and we try to keep a very quick response time for that reason.
Marty Kausas:
And do you feel like there are any differences being in healthcare? Is it just the priority and urgency of the requests, or do you think it's any different from traditional B2B companies?
Jess Herbert:
Yeah, I actually do, because there's the layer of patient safety. So for every ticket that comes in, if it's a user needing functionality, does that impact patient safety? Is that going to have a negative outcome for their patient? Could it cause harm? That's always at the top of our mind. And we always want to make sure we're enabling all of our users to have the resources they need to give excellent care to their patients. So yeah, absolutely more complicated.
Marty Kausas:
Awesome. Okay, Zac, I want to shift back to you. Let's get into knowledge management and where things are going. Can you talk about how you started to think about AI in your knowledge management workflows and where you see that going?
Zac Hodgkin:
Yeah, that's a good question. Honestly, I never started with the idea of, oh, how can AI help me? It kind of fell into our lap as we were going through the support process. When people started using ChatGPT, our TSEs started using it to help improve their messaging, or creating agents with ChatGPT to reference our knowledge base and our documentation. And then there's this current step that we're in, where the tooling we're using, which is Pylon, has been listening to us and its other customers and has implemented some really cool stuff that has just made the lives of our support agents easier. And let me know if you want me to talk a little bit more.
Marty Kausas:
Yeah, well, I guess people probably want specifics here. So what are the details? You mentioned AI starts coming out, ChatGPT starts getting used by everyone. And by the way, was the team concerned at all about security, with people pasting into ChatGPT? Was that ever an internal discussion?
Zac Hodgkin:
Yeah, I think the two big discussion points with ChatGPT were, first: don't be an idiot and post confidential information into it. And then the other part was, there are some times when a TSE is responding to customers with the most obvious copy-and-pasted ChatGPT answer, and I had to be like, you could have said the same thing in one sentence instead of posting four paragraphs. So yeah, there were definitely some...
Jess Herbert:
Oh, sorry, what were you saying? "I hope this finds you well." That's the big giveaway.
Zac Hodgkin:
Yeah, exactly. So making sure TSEs are proofreading what they're actually sending to customers, and also making sure it sounds like them and not someone else, I think those were the two big things when they started adopting ChatGPT. One of the other cool things is we used to use Grammarly internally, and no one needed to use that anymore once TSEs started...
Marty Kausas:
And one thing I also want to highlight that you said: instead of asking, hey, how can AI help me, you took the approach of looking at your current processes and seeing how AI can naturally be layered in. I feel like that's the correct way to approach it, because that way you're just optimizing what you already have, so you know you're going to get gains, right? It's like, hey, I do this process all the time. How do I make it faster? Or how do I not have to do it at all? How do I not have to think about it? I think that's really important, and not just in knowledge management but everywhere, we should be thinking that way. So yeah, that makes complete sense. Danielle, could you talk about how you've been thinking about AI in knowledge management? And maybe we go a little deeper with you: what workflows have you noticed, what were you doing before, and what have you started to try now with AI?
Danielle Murphy:
Yeah, so when I started, we weren't really using any AI for building out our processes. So it was a very painful time of going through Slack, reviewing cases, being like, okay, I'll document that, I'll document that.
Marty Kausas:
And really quick, were people sending messages into a Slack channel for you? Or when you say you were looking in Slack, what do you mean?
Danielle Murphy:
Yeah, so we support customers through Slack. We'll have shared Slack channels with them, and then we have a support triage channel where everything comes through for us. That was how I built up a lot of my knowledge when I started: going back case by case by case and finding things out that way, because we didn't have an internal knowledge base for me to go through. And when I wanted to start building out an internal knowledge base, it was just copy-paste, linking conversations, thinking, how will I word this? Does this make sense? That kind of way. AI has definitely helped big time there, especially since Pylon has a generate-article feature. For me, that has been such a game changer, because with one click I already have something to start with. And my manager always has the perfect metaphors to reference. I remember previously I was saying that I really wanted to find the time to create all these articles, or an FAQ section, but I was spending my time answering the exact same questions as they were coming in. So the metaphor he used was that cartoon of the two cavemen pushing a cart with square wheels, and someone says, oh, here's a circular wheel, and they're like, no thanks, we're too busy. But with AI, I feel like I have the AI putting on those circular wheels for me while I'm still pushing the cart. I don't actually need to stop and pause what I'm doing to change them. So that's been great.
Marty Kausas:
Danielle, just to go into more detail on that really quick, can you help us visualize: when you say AI article generation, what is that pulling from? Is that all your past conversations, or how does that work?
Danielle Murphy:
So the way we do it now is, in Pylon we'll have our issue, which is the back-and-forth conversation with the customer. Previously I would have just taken snippets from it and my own knowledge and built it out that way. But now, with just the generate-article button, it does it for me. It will summarize the conversation and any steps I've mentioned, any screenshots I've put in, or error messages. And then obviously I know myself, okay, we didn't actually touch on this specific error that comes up with this feature, but I know it does come up, so I'll just add it in there. That's been such a game changer, because I find the hardest part is just starting: thinking, okay, how would I say this, which way would this flow best? But with that, I just have a starting place. I can dump all my thoughts into it and it will fix them up for me. So I'm not worrying now about, oh, I'll have to put time aside to do this. I can just get started, clean it up, and I have something to work on.
Marty Kausas:
Yeah, 100%. It's actually incredible. We get this type of feedback all the time, and obviously this feature gets used a lot. And for the audience, it sounds so simple that you almost gloss over what's actually happening. What Danielle just described is, if you've ever used ChatGPT, a very obvious use case: hey, take an existing ticket or a conversation you've already had with a customer, and click a button to have AI generalize it into an article draft that you can then edit and make your own. So basically, think of it as the same thing as copying a whole conversation you've already had with a customer, pasting it into ChatGPT, telling it to write a knowledge base article, and going from there. But that's operationalized now for you guys, right?
Danielle Murphy:
Yeah, 100%. When I did an internal demo, I went through the flow of someone reaching out: first the AI agent tries to answer, but it's something we don't have a knowledge base article on, so it gets routed to a support engineer, and the support engineer gives the answer. What I said in our demo was that normally that might have been where it ended. The next person who runs into the same error has to look through Slack, look through any internal Notion docs we may have. But now with AI, we can just click to generate an article, and boom, the article's done. Next time a customer reaches out, the AI agent has that information to give them an answer straight away. If they're running into the same error, they ask a question and straight away they have their solution. They don't have to wait maybe 20 minutes or an hour for someone to be available. They can just fix it and move on, whether they're evaluating or just using the product. It makes it a lot more scalable, especially for a team size like ours. It's just such a game changer to have something do that all for you.
Marty Kausas:
That's awesome. In that demo that you showed to the team, the AI, I assume, was able to answer the question automatically the next time that same question was asked.
Danielle Murphy:
Yeah, exactly.
Marty Kausas:
That's awesome. And I'm curious, maybe a question for all three of you at once: do any of you have specific guidelines you're trying to establish for how articles should look or feel, or tone, or anything like that? Maybe starting with you, Jess.
Jess Herbert:
Yeah. So this is actually our second version of our knowledge center. Our first version had no AI; it was all human input and a very specific template. If anybody has seen the YouTube video of the Exact Instructions Challenge, where the kids try to tell their dad how to make a peanut butter and jelly sandwich, and they don't include all the steps, so he does them literally: that was the basis we used. At the time I was like, if a robot could do it, then that's how it should be written, and this was before AI was there. So that idea is coming back and is important because of our AI agents. Now we have a little bit looser template. It's a little more narrative, not that exact challenge, so the peanut butter and jelly sandwich would not be accurate, but it gives our users exactly what they need to know in a confined way. We also enabled the Pylon agent, which we found was answering pretty well most of the time, but we can definitely make our documentation and our knowledge better to make that agent even better. Danielle, you were talking about the article generation and how it takes that single issue. One of the things I'm super excited about is we also have an issue AI that will look back at our old responses across other tickets. So when we use that for a response and then create an article off of it, the generated article is even more comprehensive. So I'm super excited for that.
Marty Kausas:
So basically, you're using AI to answer a customer question based on previous responses you've given, and then at the end of that, you're generating an article based on what the AI drafted for you.
Jess Herbert:
Yeah, basically. It's comprehensive all the way around. And it's super important, especially in healthcare: we have protected health information, so we have to be very specific about what AI has access to, what it doesn't, and how content is generated. So again, these articles are just a starting place, and we still go through them with our same process to make sure they're accurate, there's no harm to patients, and our users can continue on with their workflow when they're seeing 20, 30, maybe more patients a day. So yeah, it's great to have that starting place as a lifting-off point.
Marty Kausas:
Yeah, and one thing I want to bring attention to is the issue copilot you mentioned: basically, drafting responses to customers. We find that for B2B, that's just way more practical in a lot of cases, because you care about the accounts a lot more, you're higher touch, and accuracy is way more important. If you have a million tickets per day, okay, some people are going to get bad responses; that's okay. And frankly, consumer questions are just much simpler. Our observation has been that if you're a consumer company, probably 80% of your questions are pretty easily automatable with the AI chatbot stuff everyone's pushing, and those tools bring a ton of value there because it's easy to quickly deflect those questions. But in B2B, it's the inverse. Maybe 20% of your questions are simple ones that could be straight deflected based on content, and the rest are way more complicated: you have to go into a bunch of internal tools, pull data about the account, and look at the history with them and what you've discussed before. So, Zac, coming back to you: how do you think about automated responses versus copilot-style, agent-assist functionality?
Zac Hodgkin:
Yeah, I'm pretty much in the same boat from what I've seen with an AI agent versus AI assist in messaging. A lot of our customers' questions are very custom, very specific. It's not super easy to just hit it with a KB article and nail it; the KB article is there to assist. But the thing Jess called out, about the assist bringing up a message that was used in a ticket and then a KB being created: literally yesterday, one of our TSEs shared that a pretty complicated question came in, and they used the auto-assist when generating their message. It actually suggested a very complicated but very correct workflow from over a year ago, from when that same TSE suggested it. So they were able to solve it, done and dusted, and then also create a KB article. So hearing her say that's the workflow is just so funny, because it's super true.
Marty Kausas:
And it's so cool. One thing philosophically that we believe internally is that, especially for B2B, you should be really, really sensitive about not cross-pollinating data between customers. For example, if you do turn on an AI chatbot in B2B, you really want it pulling only from public data sources, or things you've reviewed human-in-the-loop and that are already out there where anyone could see them. Versus when you use the issue copilot to draft a response for you, you can use more data from those previous responses, because you'll have eyes look it over and verify nothing sensitive is going out. So that is pretty incredible. That's honestly amazing that that happened. I'm sure that's happened to us internally, but I haven't heard it yet. Maybe we're just getting so used to these cool things that they're getting normalized really fast. But we do have these magical moments where it's like, oh wow, that just works now. That's cool. So let's talk about what you've tried that hasn't worked, and I'll open this up to everyone. Are there any workflows or AI things you were bullish on that weren't successful, and any reasons why you think that is? I'll leave this open-ended for anyone.
Danielle Murphy:
Yeah, I'm happy to start with this one. Previously we found that when we were trying to use an AI agent, it was hallucinating a lot. As you said, with B2B, for a lot of the issues we come across, the answer isn't just copy and paste; there are a lot of different moving parts. It was exactly that for us: policies are a feature we have, and they're written in a specific language, but the AI was giving the answer for a different language, which would make the policy useless. So I was really afraid of letting an AI agent be customer-facing. But the new workflow, the one we use now with Pylon, is that it only uses information from our knowledge base center. So we know what it's saying is accurate, relevant, and updated. And if it's not confident, instead of trying to answer and ruining the customer's trust, it will just let them know someone's going to answer soon. It's a lot more effective. And because we have the AI agent, if someone's stuck on something we have documented, instead of waiting for support from a human, they can get their issue resolved straight away if there is a knowledge base article.
Marty Kausas:
And going a little deeper into that: when you've played around with agents, have you gated them to only certain segments of customers or certain types of issues? Have you tried that, or have you tried just letting them run for all first passes?
Danielle Murphy:
Yeah, the way we tried it previously was kind of all first passes, but it only went to the support engineer. So it was more like a copilot, saying, this is what I would answer. And we were like, well, it's great that you didn't answer the customer with that, because it's not accurate.
Jess Herbert:
We did something similar and actually turned it on just for our Slack customers. As soon as the agent responds, it says, hey, I'm your AI agent, I'm going to be here, don't worry, I'll escalate to a human. And it was really well received, even when the answers weren't totally perfect. Our customers are very generous with their time and patience in relation to the AI agent.
Marty Kausas:
And one thing, Zac, I want to loop you in on. I think a lot of people in other industries, when they think about agents, are making them try to impersonate humans, which I actually think does not work in support. People get really mad if you're like, hey, this is Anna from support, and it's like you're tricking them, right? You're making it seem like there's someone there who isn't. So Zac, can you talk about the profile you created for Panther and what that looks like?
Zac Hodgkin:
Yeah, so we named it Peter Panther, and then we did an AI generation for the image. And in its canned messaging, the thing we wanted to call out, because our TSEs have really good relationships with their customers and we don't want an agent impacting the relationships our support team has, is the obvious: hey, I'm an agent, and I'm not going to just keep messaging you. I'm going to take a crack at this, and if it's not okay, I'll get you over to a smarter person who can answer this for you. So, a very obviously-AI guy that is trying...
Marty Kausas:
Yeah, and we've seen that across the board. For our own product, in the create-agent flow, we initially bought this pack of human faces that you could stick onto the AIs, and we found no one was using them. People thought it was super creepy, and it was just not cool. And especially if you're trying to pretend to be a person and then you respond incorrectly, that just makes your company look so bad, right? So it's better to just get ahead of it, maybe even say "AI beta," so it's like, okay, it's a beta you guys are trying out, it's not the real thing. So I highly, highly recommend that when people do start implementing conversational agents or AI chatbots, they don't try to fake a person. Make it clear that this is just a first-pass attempt to help speed up the process.
Danielle Murphy:
We have a similar one that will say, hey, I'm in training, I may not always hit the mark, but I'm here to help. And it kind of reminds me, I think it was Duolingo, that one of their notifications, after constantly trying to remind you to do your lessons, says, oh, these notifications don't seem to be working, I'll stop. And I think they said that with that notification, people kind of felt bad and were like, oh no, no, it is working, and they returned to their lessons. It makes me think that's quite similar to when the AI bot says, oh, I might not always get it right, but I'll try. You nearly feel some sort of sympathy, like, no, it's okay, you're doing your best, this is fine.
Jess Herbert:
We saw the same thing. We actually had some of our customers saying "good job" to our bot, which was fun to see.
Danielle Murphy:
I know, I'm like a proud parent whenever I see it has answered correctly. I'm like, oh, this is weird, but I'm so proud.
Marty Kausas:
One of the things to point out as well is that when we think about knowledge management, pre-AI it was the source of truth for self-serving your own support, right? Hey, if you're Googling, or you go to the help center and search something, you can answer your own question. It's even more important now, because that knowledge is powering everyone's drafting capabilities if they want the AI to take a first pass, and it's powering the conversational agents. So really, it's crazy. When people come to us and say, hey, I want to implement AI, we ask, okay, what does your knowledge base look like? That's our first question. And if they don't have a knowledge base, it's like, okay, well, what's the AI going to learn from? Where's the data going to pull from? So everything actually has to start with that knowledge base being really clean, and anything you can do to make that process more efficient, identifying what to write and actually writing the articles with AI article generation, as we've discussed, ends up being super important. I guess, moving upstream in that flow of identifying what to write: has anyone tried anything to help figure out what content is missing, or how to even think about what to prioritize?
Jess Herbert:
I'll start on that one. There's actually a great question in the chat right now about exactly that. I think all of us who use Pylon are super lucky to have the AI knowledge gaps option. It looks back at our tickets and can say if a single ticket needed a new article, and it can also find trends across certain issues or ideas that need one. Then you can, again, generate an article, and it will show you the 10 or 20 tickets associated with it and use all of that data to generate the article. Before that, it was: oh, we realize we don't have the article when somebody asks the question, it goes into a backlog, and it waits until you can update it on your own. So I think those two, the gaps and the articles needed for a specific ticket, really accelerate exactly what we need to do.
Marty Kausas:
Yeah. Anyone else? Danielle, have you thought about gaps, implemented those, or are you planning to?
Danielle Murphy:
Yeah, so one example was actually today, and it was more for future-proofing. We changed something in a module, and any of our customers currently using it are going to need to update. So we had this whole conversation about why they need to update and what steps they need to take. From that, I could generate an article specific to people upgrading from the previous version to the new one, rather than setting up fresh. I could also use the AI to find which customers have talked about this and may be using it, and then, again with Pylon, send a broadcast to all those customers specifically: hey, we see you're using this, here's an article for how to upgrade it. It's so nice when all the different features come together. And I think for support, being proactive like that is definitely valuable: hey, this is going to break in the future because of some changes, but this is what you can do now to get ahead of it, and here's a knowledge base article specific to you. So they're not having to read around it and guess which parts apply, it's just straight away there for them, and they don't run into an issue further down the line where they just get frustrated.
Marty Kausas:
Got it. And Zac, what about you? The process before, or I guess currently and where you're going: how do you identify what to write content for, and how are you thinking about that moving forward?
Zac Hodgkin:
Yeah. For every support interaction, our TSEs try to follow a process: understand the customer issue; use their resources, search the KB, search Docs, search Slack, ask someone else, ask engineering; and when they find the answer, if it's outside of those resources, create a new resource and use it to share with the customer. So we try to shorten the gap by incorporating that process into the actual ticketing process. But, like we've talked about already, there are things we miss, things that aren't there. And I think Pylon has a really good way of identifying potential knowledge base articles or missing gaps. Another thing that falls into this, which actually isn't a knowledge gap thing, it's the opposite problem, is one of the cool things Pylon can do.
Marty Kausas:
I know we all keep saying Pylon does these things, and I'm sorry, by the way, but when we talk about what's next, it's platform stuff, right? It's the tools.
Zac Hodgkin:
Yeah, it is. But Pylon does do these things, and they work really well, so we're going to talk about them. One of the things you can do is, when you're generating an article, you have the ability to see if there are potential duplicate articles. It helps you maintain the health of your knowledge base, so you don't end up with a bunch of duplicate articles, which in the past was something miserable to sort through. Knowledge base cleanup and maintenance is a huge piece of this, and it's kind of a nightmare unless you stay on top of it.
Marty Kausas:
Yeah. I'll also say, I'm sure everyone's had this experience before, both on the panel and listening. We hear the Frankendesk story all the time: you come into a new team, a new company, and you have no context on everything that's been written. It's literally a crazy hodgepodge of potentially hundreds of articles. That's a very obvious place where this could help, right? You're about to write an article and something already exists just like it. It's one of those things you weren't even doing manually before, or maybe you would try, but it's so hard and so frustrating to do manually that it wasn't useful. And Jess, I know you had something to say too.
Jess Herbert:
Yeah, the other side of that, too, is, like I said, we're working on V2 of our Knowledge Center, so we still have all of our old documents as we're rewriting them. The duplicate article notification has come in really handy a couple of times, just to stop our customers from having that confusion of, oh, do I do it this way or this way? So yeah, it's fantastic.
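The duplicate detection the panelists describe is commonly built on text similarity. Pylon's internals aren't public, so purely as an illustration of the idea, here is a minimal bag-of-words cosine-similarity check in Python; production systems typically use learned embeddings rather than raw word counts.

```python
from collections import Counter
import math
import re

def tokenize(text):
    """Lowercase word tokens; a deliberately simple stand-in for real embeddings."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def find_duplicates(articles, threshold=0.8):
    """Return pairs of article ids whose text is suspiciously similar."""
    vecs = {aid: tokenize(text) for aid, text in articles.items()}
    ids = sorted(vecs)
    return [
        (ids[i], ids[j])
        for i in range(len(ids))
        for j in range(i + 1, len(ids))
        if cosine(vecs[ids[i]], vecs[ids[j]]) >= threshold
    ]

articles = {
    "reset-password": "How to reset your password in the dashboard",
    "password-reset": "How to reset your password from the dashboard",
    "billing": "Understanding your monthly billing invoice",
}
print(find_duplicates(articles))  # the two password articles pair up; billing does not
```

Surfacing candidate pairs at authoring time, as Zac describes, is what keeps the cleanup from becoming the retroactive nightmare he mentions.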
Marty Kausas:
Awesome, okay, so we're about at the time where we should move to Q&A, and we have a bunch of questions. Actually, sorry, there's one more really important question, and if all of you could share quickly: how has the team received the AI stuff? Some people are concerned: oh, this is going to threaten my job, or it's going to make my job more robotic because I'm just pressing a bunch of buttons. Give us the honest reception of how it's been for the team internally.
Danielle Murphy:
For us, it has gone down really well, especially because the AI agent and the knowledge base articles can help a lot with the repetitive stuff, but maybe not as much with the issues that take a lot of investigation and troubleshooting. For us as engineers, those are the more fun issues. A lot of the time you get stuck with the repetitive ones, explaining something or sending document links, so you don't have as much time to deep-dive on the issues you're excited to investigate. So that's one way we're looking at it: it's taken away the grunt work, and we're available to deep-dive and troubleshoot the issues we're actually excited about, the ones where you've no idea what the cause could be because there are a lot of different moving parts. My support team is very excited about that. And for other teams, such as the product team, I'm very lucky at Spacelift that our product side knows there's a lot of value in our customer interactions. Previously, they'd ask me, oh, what customers should we reach out to about this feature we want to release? And I'd have to take some time to think, okay, who have I had conversations with around this, and search Slack. But now, with the Ask AI feature we have on our issues, it can just do that search for me, which makes it a lot more accessible for me and for them. So we can build a much more customer-focused product, based on real information and real conversations we've had, in a matter of minutes.
Jess Herbert:
Yeah, I totally agree with that, and our team has embraced all of the tools we've been given. We were probably doing it ad hoc before with ChatGPT or whatever other tool we had, so having it all in one place is great. And it goes beyond knowledge management. Like you said, Danielle, it's searching across all the tickets, not having to remember every single detail, and finding things easily. That really gives us a little bit of edge, some more time to do the fun things.
Marty Kausas:
Zac, anything quick?
Zac Hodgkin:
No, same boat. The TSEs love it.
Marty Kausas:
Awesome. Cool. Okay, so let's hop into the Q&A section, and we'll start from the top. By the way, if you don't see the Q&A, there's the sidebar on the right; chat is the default, but the Q&A is the third tab. So, first one, from Ravi: after an article gets generated from a customer conversation, do you have an approval mechanism before publishing? And if you're a big company, how do you manage all these generated articles? Zac, you're probably a good person for this one; you mentioned approval workflows already. Do you want to take it?
Zac Hodgkin:
Yeah. It's basically what I mentioned earlier: specific people whose vetted content has been solid for an extended amount of time can just publish things without having to get someone's review. But for our TSEs who are more junior, we have an approval process where they ask for someone's review, the reviewer reviews it, and they do a little back and forth if more information is needed. Then once it's up to snuff, we publish it that way. And so that's how we do it here at Panther.
Marty Kausas:
And just tactically, can you describe how does someone ask for review?
Zac Hodgkin:
Yeah, that's a good question. Currently, how we're doing it in Pylon is with comments. We'll tag someone else and say, hey, this is ready for review. The reviewer will say, hey, it looks like it's missing whatever, can you check on this? The other person says, all right, I did it, and then we're like, all right, good to go, publish it. But one of our feature requests is having an actual approval flow, a queue within the system, because that's what we've done in the past with other knowledge base platforms. So we're just waiting for Pylon to give us that type of workflow.
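The in-product approval queue Zac is asking for doesn't exist at the time of this conversation, so purely as a hypothetical sketch of how such a workflow could be modeled (the states, transitions, and role names here are all made up, not a Pylon feature):

```python
from dataclasses import dataclass, field

# Hypothetical article-approval state machine: draft -> in_review ->
# (changes_requested -> in_review)* -> approved -> published.
DRAFT, IN_REVIEW, CHANGES_REQUESTED, APPROVED, PUBLISHED = (
    "draft", "in_review", "changes_requested", "approved", "published")

@dataclass
class Article:
    title: str
    author: str
    status: str = DRAFT
    history: list = field(default_factory=list)  # audit trail of transitions

    def _move(self, new_status, actor, note=""):
        self.history.append((self.status, new_status, actor, note))
        self.status = new_status

    def request_review(self, reviewer):
        assert self.status in (DRAFT, CHANGES_REQUESTED)
        self._move(IN_REVIEW, reviewer, "review requested")

    def request_changes(self, reviewer, note):
        assert self.status == IN_REVIEW
        self._move(CHANGES_REQUESTED, reviewer, note)

    def approve(self, reviewer):
        assert self.status == IN_REVIEW
        self._move(APPROVED, reviewer)

    def publish(self, publisher):
        assert self.status == APPROVED  # junior authors need sign-off first
        self._move(PUBLISHED, publisher)

a = Article("Upgrading the module", author="junior-tse")
a.request_review("senior-tse")
a.request_changes("senior-tse", "add the rollback step")
a.request_review("senior-tse")
a.approve("senior-tse")
a.publish("kb-publisher")
print(a.status)  # published
```

The point of modeling it as explicit states, rather than free-form comments, is exactly what Zac describes: drafts waiting on review become a visible queue instead of scattered comment threads.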
Marty Kausas:
Totally. As you were describing that, I was in pain listening to the comment workflow. For anyone listening: basically imagine Google Docs style. You highlight something and tag someone, and that DMs them in Slack with whatever comment was left. So yeah.
Jess Herbert:
I mean, it's that same workflow, and it's a little painful, but it's not horrible.
Marty Kausas:
This is how you know this is the truth; the truth is coming out here. So it's not all AI magic. Okay, cool. Let's go to Jackie's question: how do you balance the use of AI with the loss of hands-on skills, for example, using AI to create content versus writing the content yourself? Danielle, I feel like you might have an opinion here.
Danielle Murphy:
Yeah, definitely. So, I hate writing content. And part of it is that a lot of our customers are based in America, and I've realized over the past few years there's a big difference between what I speak, which is kind of Irish English, and American English. I'd write an article, or even a Slack message, and think it was fine, then send it to someone and they'd be like, what does that mean? And I'd realize, oh, okay, that must be Irish English. But with AI, it kind of does it for me in American English, and I think it actually improves my skills more than it erodes them, because I'm still reviewing the articles and still making changes. I'll read something and go, okay, saying it that way, or this style of flow, actually works a lot better than what I would have had. So yeah, I definitely think it's improving my skills each time rather than losing them.
Marty Kausas:
And maybe the heart of the question is also around losing hands-on skills. Okay, yeah, that makes sense. And I guess: does it feel like you need to know less about the topic the article is being generated about, or do you still need to maintain the same level of knowledge around it?
Danielle Murphy:
I'd say definitely, for now, we're still maintaining the same level of knowledge, because anything that's in the article, we should make sure it's the truth before we publish it. But it's also helpful if it adds something extra and we're like, oh, where did that come from? We can see it was referenced in this issue, go look at that issue, and go, okay, I didn't know that feature also had this little extra part. So again, it's improving things that way. You're not having to go from scratch thinking, okay, this is all I know, and I hope that's it. The AI has all the information available, and you can go through it and say, okay, maybe that's not 100% applicable, we went a bit off on a tangent in that conversation, but this part is, and I didn't actually know that. So you're constantly learning from it as well.
Marty Kausas:
Yeah, and one thing to point out is that if you're doing AI article generation, it's coming from responses that you've given to customers. So you basically had to already have the background and information to know what to say, or someone on your team did. The article generation is really just taking work you've already done and putting it in a different format that's more generalized. So maybe that's another way to think about it. Jackie, another question from you: what ethical considerations should we keep in mind when using AI in the knowledge management space? I'll open that up to whoever wants it.
Jess Herbert:
I think for us, it's ethical, yes, but also, like I said before, we have protected health information in our tickets, and we have to make sure that is filtered out and not included in anything that could be shared across customers or even publicly. So I think that's one of the biggest considerations we take, whether that counts as ethical or not. I'm not sure if that aligns.
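The filtering Jess describes is usually a de-identification pass before ticket text feeds any AI pipeline. As a toy sketch only: real PHI de-identification is far more involved (HIPAA's Safe Harbor method covers 18 identifier categories, including names and dates, which simple regexes like these do not catch):

```python
import re

# Toy redaction pass over obvious structured identifiers. The patterns and
# labels are illustrative assumptions, not a compliance-grade scrubber.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
]

def redact(text):
    """Replace matched identifiers with placeholder labels, in order."""
    for pattern, label in PATTERNS:
        text = pattern.sub(label, text)
    return text

note = "Reach Jane at jane@example.com or 555-123-4567; SSN 123-45-6789."
print(redact(note))  # Reach Jane at [EMAIL] or [PHONE]; SSN [SSN].
```

Note that the patient's name survives this pass, which is exactly why healthcare teams layer dedicated de-identification tooling on top of anything regex-based.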
Marty Kausas:
I think that counts. Danielle, okay, a question directly for you: what are the sources of knowledge base gaps, and how do you figure out what topics you need to create an article for? I think you may have already covered this, but feel free to resummarize.
Danielle Murphy:
Yeah, definitely. So right now, the way our flow works is that as we're closing out tickets, we'll mark them as either needing an article, needing an existing article updated, or already having an article. That way we're not missing anything. The downside, obviously, is that it's only as tickets come in, and now Pylon has knowledge gaps that will tell us what's happening, so we don't need to worry about that as much. But another way we did it was with AI again: it could automatically tag issues, so if we had a lot of issues around, say, the user management feature, we could have our dashboard showing which features were the highest case drivers. For user management, for example, we could click in and say, okay, we definitely need some articles around this, this issue comes up a lot, let's focus on that. So at the start, it was about looking at the case drivers and what articles we could get out to reduce the number of tickets we were getting. And now it's just closing tickets as they come and making sure there's an article there. So we're constantly scaling up as it comes, and it makes our life a lot easier.
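The "highest case drivers" dashboard Danielle describes reduces to counting resolved tickets by feature tag and writing articles for the top offenders first. A minimal sketch (the ticket shape and tag names here are made up for illustration):

```python
from collections import Counter

def top_case_drivers(tickets, n=3):
    """Rank feature tags by ticket volume to prioritize article writing."""
    counts = Counter(tag for t in tickets for tag in t["tags"])
    return counts.most_common(n)

tickets = [
    {"id": 1, "tags": ["user-management"]},
    {"id": 2, "tags": ["user-management", "sso"]},
    {"id": 3, "tags": ["billing"]},
    {"id": 4, "tags": ["user-management"]},
]
print(top_case_drivers(tickets))  # 'user-management' leads with 3 tickets
```

With AI doing the tagging, as in Danielle's account, the counting itself is trivial; the value is that article effort gets pointed at whatever is actually driving volume.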
Marty Kausas:
Awesome. Okay, I know we're right at the end here, so I want to be conscious of time. I really want to thank our panelists, Jess, Danielle, and Zac; thank you so much for being on the panel. This has been great, and thank you for giving Pylon some shoutouts; I really appreciate that. For everyone, we'll send out a recording of this afterwards. Also, I know some of what we talked about is kind of unclear unless you can visually see what was discussed; it's kind of crazy to ask, okay, when you say AI article generation, does AI actually write the article, and how does that work in practice? So we'll also send a recording explaining how you can run some of those workflows. Thank you, everyone, so much for coming. Thank you to the panelists; I really appreciate your time and your sharing your wisdom. You are trailblazers here, obviously, and clearly a lot of people want to hear from you. And shout out to Richard and the Support Driven team for making this happen. So thank you very much, everyone.
Danielle Murphy:
Thanks, everyone. Thank you.
Full Transcript
Transcription was done by AI. It may not be fully accurate.
Marty Kausas:
about this. And can everyone see my screen? Maybe Zac, you can confirm? Danielle? Cool. So, yeah, the topic today is really fun. A lot of what you hear about, the webinars you see or the things on LinkedIn, when you think about AI in support or post-sales, end up being about AI deflection: trying to answer questions automatically with AI. But there's a whole other part of post-sales, support, and success that includes knowledge management. And this actually ends up being, especially for B2B companies, an even bigger part of what AI can do, but it's much less talked about. So I'm super excited to chat about this topic today with Zac, Danielle, and Jess, and go through their experiences and how they think about it. Really honored to do that. So, the quick agenda we're going to go through here. First, I have to shill my company; I'm sorry, that'll take a couple of minutes at the beginning. We'll go through a quick intro on Pylon and the things we're thinking about on the knowledge management side. Then we'll go to the panel discussion, and I'll do intros for everyone. And then we'll go to Q&A. If you have questions throughout, you can always drop them into the Q&A section; we can see them live as you comment or ask, those questions can be upvoted, and we'll get to them at the end. So yeah, really excited for this, and I'll dive in with my quick advertisement. I'm sorry, ad blockers won't work today. So, quick about Pylon: I'm one of the co-founders, and we're building the first customer support platform for B2B companies. Our goal is really to bring together the entire post-sales team to work in one place, instead of having support, success, solutions, and professional services all live in separate systems. We're bringing all those teams together and really combining all those products into one.
A lot of people like us because you can offer support over shared Slack channels or Microsoft Teams instead of the traditional channels. AI batteries are included, as you'll learn today, and of course, you want a modern tool. So, quickly setting the stage for how a lot of people are thinking about AI and knowledge management, let's go through how people used to think about this. You find gaps in your help center and your documentation, you write articles, and then you try to keep your knowledge base clean. So before, you might see a question and shoot it into a Slack channel: hey, here's a gap, we should write an article for this. And then that message gets lost or ignored. Also, when you write articles, you'd reference a bunch of old tickets and then write from scratch. And in terms of keeping knowledge bases clean, you check for duplicate content and try to maintain consistent formatting. All of this is a lot of work, and rightfully so, because it ends up being really important for your customers. The new ways we're seeing the industry move are: AI can help you discover what gaps exist in your knowledge base and find references to the questions people are asking about those topics. AI can help draft articles for you, not completely writing and publishing them, but human-in-the-loop style, helping you create things from scratch faster, even based on previous responses you've given to other customers. And finally, AI can detect similar content so you're not rewriting the same thing, and even write articles into specific formats for consistency. Okay, so we're done with the ad. I'm really excited now to go to our panelists here.
And so, yeah, we can go through one by one, and I'm going to ask each of you to introduce yourself, introduce your role, where are you based, and then what your company does to start. So Jess, we can start with you.
Jess Herbert:
Yeah, nice to meet everybody, and thank you for joining. My name is Jess Herbert. I work for Canvas Medical, and I'm our Senior Manager of Customer Experience. Canvas Medical is an EMR at heart that is accelerating everyday medicine. So we have APIs, SDKs, database models, everything to help our users customize the system and care for patients.
Marty Kausas:
Thanks, Jess. Danielle?
Danielle Murphy:
Hey everyone, I'm Danielle. I'm based in Dublin, Ireland. I'm the head of support engineering in Spacelift and Spacelift is essentially a platform that will help you automate and optimize your infrastructure as code workflows. So what we basically do is we make managing your cloud infrastructure a lot more efficient, scalable and secure.
Marty Kausas:
And maybe I'll add in one more part of the question. Danielle, how long have you been at Spacelift? And then also how long have you been in the industry overall?
Danielle Murphy:
So I've been at Spacelift coming up on two years next month. For the DevOps industry, I'd say maybe two or three years, but cloud infrastructure I've been in for about five or six years. This role is more specialized, and I'd say I've been doing it about two to three years.
Marty Kausas:
Awesome. And just really quick, rewinding to you, Jess: what about yourself?
Jess Herbert:
Yeah, I have been at Canvas for, it's going on seven years, I believe, this year. So I was actually our first customer-facing employee way back in 2018. And before that, I wasn't actually in customer support directly, but I was caring for patients in clinical care, where I then moved to kind of at the elbow or side by side support for EHR users and then kind of evolved to here.
Marty Kausas:
Awesome. And then Zach, sorry for rewinding there.
Zac Hodgkin:
Totally cool. My name is Zac, and I do support things at Panther. Panther is a cloud SIEM. If you don't know what a SIEM is, it basically collects a bunch of logs across different platforms, and then you can run detections on them; security teams can use it to do investigations based on what they're seeing in the logging. And then the other questions: I live in Ann Arbor, which I already said earlier, and I've been in cybersecurity for about eight years now. Is there anything else I need to answer?
Marty Kausas:
Yeah, that's great. And maybe starting with you on a question about knowledge management, like how do you think about knowledge management? What are like the core components as you think about it broadly?
Zac Hodgkin:
So in the most general terms, knowledge management is the process of creating, updating, and maintaining answers to common questions or issues so you can quickly solve problems, whether for people on internal teams or, the best thing about it, putting that stuff in front of customers so they can self-serve and never actually have to engage with support. But knowledge management is important beyond support teams and customers too; it can also help out your other internal teams.
Marty Kausas:
And can you talk about at Panther, what is the team structure? Is there anyone who specifically is dedicated to knowledge management or is it split across the team?
Zac Hodgkin:
So in terms of people who contribute to our knowledge base, it's mostly our support team. We do have some people across other teams who contribute from time to time. And how we have it split up is basically two tiers of contributors to the knowledge base, and shout out to Sally, since she's the one who implemented this at Panther. We have KB publishers, who've been vetted on the content they're creating and can just go and create new content. And then we have regular contributors, who go through an approval process to ensure all the content meets our standards before being published to our customers or internally. But yeah, it's mostly, actually it's entirely, our support team.
Marty Kausas:
Got it. Danielle, how does that compare to what you guys have set up? Is there, are there dedicated resources for knowledge management? Is it handled by you or other people on the support team?
Danielle Murphy:
Yeah, it's pretty similar. For us as well, it's mainly our support team, and I like it that way, because we have a lot of customer interaction, so we're familiar with how they like receiving information. For example, I think we're all guilty of not loving reading documentation. Our knowledge base articles are kind of, okay, for this error: step one, two, three, where our documentation is more focused on, oh, let me tell you all about this feature. So it's the support team, and we're a small but mighty support team; there are three of us at the moment, and we all contribute. We don't have a dedicated person, but again, I'm quite a fan of that, because then it's not just one person's view or one person's information; it becomes a shared collective. I'm based in Europe, and the two other guys on my team are both based in America, so we'll always have someone on answering tickets as they come in. But then we have it split up into on-queue time and off-queue time, and right now the focus for off-queue time is getting our knowledge base articles updated. So that's how we have it split, and it's working quite well so far.
Marty Kausas:
And so do you have dedicated times of the day? You said on-queue and off-queue, so: hey, I'm the directly responsible individual for taking items off the queue during these hours, and then separately there's an expectation that I'm writing content. Is that roughly how it works?
Danielle Murphy:
Yeah, exactly. So I could spend half my day on queue: any tickets that come in, I'll jump on and answer. And then, when I have my off-queue time, that's when I'll go and review what I've looked at and say, okay, this one doesn't have an article, let's get one created. And it's the same for the next person: when I'm off queue, they'll be on queue, so we still have coverage, and when they're off queue, they'll do the same with their articles, or maybe that's the time they're learning. And we just continue that.
Marty Kausas:
And can you give us a sense: is everyone 50/50 in terms of time split, or are some people writing more content than others? Is there a reason to distribute it differently?
Danielle Murphy:
So right now I'm 50/50, and the two guys on my team are 80/20, just the way our time zones work out, to make sure we always have someone on the queue. So for anything urgent, we can just flag straight away.
Marty Kausas:
Awesome. That's actually amazing, and I want to pause really quickly here. It's very rare that we find people who are so dedicated. Well, I guess the bigger you get, the more resources you generally have, but especially on a leaner team, it's hard to dedicate real resources to writing content and keeping it up to date. So I just want to give a shout out: that's really hard to do, and it's awesome that you've prioritized it.
Danielle Murphy:
Well, Pylon has definitely helped us a lot there. And actually, on your introduction of Pylon, one thing I find with using it, and Marty did not pay me to say this, is that Pylon really helps highlight the value that support brings to the wider organization. I think sometimes that can get overlooked, but Pylon makes it a lot more accessible and visible for everyone: oh, this was how easy it was to do. So let me do this quickly, so I can show the other teams what we can bring to the table.
Marty Kausas:
Awesome. Yeah, I will definitely double click on that. But Jess, I want to loop you in really quick. Give us a sense, what's the lay of the land at Canvas when it comes to how you think about knowledge management and what does the team look like?
Jess Herbert:
Yeah, our team is also very small but very mighty; there are three of us. And we actually encompass everything from implementation, onboarding, support, and anything in between. We have one support agent who puts most of her effort into the Knowledge Center while also being in the queue. So I aspire to get to a point where we can have on-queue and off-queue, shout out to Alyssa on that one. But we all contribute: if I'm answering a ticket and something comes up, I make sure we update the article right then, and shout out to Pylon that it's easy to do right away. And even after an article is written, the three of us will review it together to make sure we have the same understanding, so that we're supporting our customers in the same way.
Marty Kausas:
Awesome. And can you give us a sense as well? You're in the healthcare space, and I assume that's a little different. Can you paint a picture of what type of support you're providing, maybe general volume, and where people are submitting tickets?
Jess Herbert:
Yeah, absolutely. So we're the EHR, and our customers are care delivery organizations, so they are actively caring for patients or supporting the people who are caring for patients. There's urgency when they message in, because it's likely impacting patient care right at that time. It's essential that we have this documentation and that it's easy to read, quick to reference, and unblocks them for what they need. Being healthcare, like I said, there's urgency, and we try to keep a very quick response time for that reason.
Marty Kausas:
And do you feel like there are any differences that, um, you have being in healthcare? Is it just like the priority and urgency of the requests or, uh, yeah. Do you think it's any different than traditional B2B companies?
Jess Herbert:
Yeah, I actually do, because there's the layer of patient safety. For every ticket that comes in, if it's a user needing functionality: does that impact patient safety? Is it going to have a negative outcome for their patient? Could it cause harm? That's always at the top of our mind, and we always want to make sure we're enabling all of our users to have the resources they need to give excellent care to their patients. So yeah, absolutely more complicated.
Marty Kausas:
Awesome. Okay, Zac, I want to shift back to you. Let's get into knowledge management and where we're going. Talk about how you started to think about AI in your knowledge management workflows and where you see that starting to go.
Zac Hodgkin:
Yeah, that's a good question. Honestly, I never had the idea of, oh, how can AI help me? It kind of fell into our lap as we were going through the support process. When people started using ChatGPT, our TSEs started using it to help improve their messaging, or creating agents with ChatGPT to reference our knowledge base and our documentation. And then the current step we're in, well, not the final step, is that the tooling we're using, which is Pylon, has been listening to us and its other customers and has implemented some really cool stuff that has made the lives of our support agents easier. And let me know if you want me to talk more about that.
Marty Kausas:
Yeah, well, I guess people probably want specifics here. So what are the details? You mentioned AI starts coming out and ChatGPT starts getting used by everyone. And by the way, was the team concerned at all about security, with people pasting into ChatGPT? Was that ever an internal discussion?
Zac Hodgkin:
Yeah, I think the two big discussion points with ChatGPT were: one, don't be an idiot and post confidential information into it. And the other part was, there were times when a TSE was responding to customers with the most obvious copy-pasted ChatGPT answer, and I had to say, you could have said the same thing in one sentence instead of posting four paragraphs. So yeah, there were definitely some concerns.
Jess Herbert:
Oh, sorry, were you going to say it? "I hope this finds you well." That's the big giveaway.
Zac Hodgkin:
Yeah, exactly. So, making sure that TSEs are proofreading what they're actually sending to customers, and also making sure it sounds like them and not someone else, I think those were the two big things when they started adopting ChatGPT. And one of the other cool things: we used to use Grammarly internally, and no one needed it anymore once the TSEs started using ChatGPT.
Marty Kausas:
And one thing I also want to highlight from what you said: instead of asking, hey, how can AI help me, you took the approach of looking at your current processes and seeing how AI can naturally be layered in. I feel like that's the correct way to approach it, because that way you're optimizing what you already have, so you know you're going to get gains, right? Hey, I do this process all the time: how do I make it faster? How do I not have to do it all the time, or not have to think about it? I think that's really important, and not just in knowledge management; everywhere, we should be thinking that way. So that makes complete sense. Danielle, could you talk about how you've been thinking about AI and knowledge management? And maybe we go a little deeper with you: what workflows have you noticed, what were you doing before, and what have you started trying with AI?
Danielle Murphy:
Yeah, so when I started, we weren't really using any AI for building out our processes. So it was a very painful time of going through Slack, reviewing cases, being like, okay, I'll document that, I'll document that.
Marty Kausas:
But AI is definitely- And really quick, were people sending messages into a Slack channel for you? Or when you say you were looking in Slack, what do you mean?
Danielle Murphy:
Yeah, so we support customers through Slack. We'll have shared Slack channels with them, and then we have a support triage channel where everything comes through to us. That was how I built up a lot of my knowledge when I started: going back case by case and finding things out that way, because we didn't have an internal knowledge base for me to go through. When I wanted to start building an internal knowledge base, it was just copy-paste, linking conversations, thinking, how will I word this? Does this make sense? AI has definitely helped big time there, especially since Pylon has a generate-article feature. For me, that has been such a game changer, because with one click I already have something to start with. And my manager always has the perfect metaphors. I remember saying that I really wanted to find the time to create all these articles, or an FAQ section, but I was spending my time answering the exact same questions as they came in. The metaphor he used was that cartoon of the two cavemen pushing a cart with square wheels, and someone says, here's a circular wheel, and they're like, no thanks, we're too busy. With AI, I feel like I have the AI putting on those circular wheels for me while I'm still pushing the cart. I don't need to stop and pause what I'm doing to change them. So that's been great.
Marty Kausas:
Daniel, just to go into more detail on that really quick, can you help us visualize when you say AI article generation, what is that pulling from? Is that all your past conversations or how does that work?
Danielle Murphy:
So the way we do it now is in Pylon, we'll have our issue, which is the back-and-forth conversation with the customer. Previously, I would have just taken snippets from it plus my own knowledge and built the article out that way. But now, with the generate-article button, it does it for me. It will summarize the conversation and any steps I've mentioned, any screenshots I've put in, or error messaging. And then obviously I know myself, okay, we didn't actually touch on this specific error that comes up with this feature, but I know it does come up, so I'll just add it in. That's been such a game changer, because I find the hardest part is just starting: thinking, how would I say this, which way would this flow best? With that, I have a starting place. I can dump all my thoughts into it and it will fix them up for me. So I'm not worrying about having to set time aside; I can just get started, clean it up, and have something to work on.
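To make the generate-article step concrete, here is a minimal Python sketch of what such a feature might do under the hood: gather the ticket's back-and-forth into a prompt asking a language model for a generalized draft. Everything here (the `Message` type, the prompt wording) is an illustrative assumption, not Pylon's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Message:
    author: str  # "customer" or "agent"
    text: str

def build_article_prompt(messages: list[Message]) -> str:
    """Assemble an LLM prompt that turns a resolved ticket into a KB draft.

    The draft should generalize the conversation: keep the troubleshooting
    steps and error messages, drop customer-specific detail.
    """
    transcript = "\n".join(f"{m.author}: {m.text}" for m in messages)
    return (
        "Summarize the following resolved support conversation into a "
        "knowledge base article draft. Keep any troubleshooting steps and "
        "error messages verbatim; remove names and account-specific data.\n\n"
        + transcript
    )

# A hypothetical resolved ticket.
ticket = [
    Message("customer", "Getting error POLICY_LANG_MISMATCH when saving a policy."),
    Message("agent", "That error means the policy body is in the wrong language. "
                     "Rewrite it in the expected language and re-save."),
]
prompt = build_article_prompt(ticket)
```

The returned string would then be sent to a model; the one-click experience Danielle describes is essentially this assembly plus a human edit of the model's draft.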
Marty Kausas:
Yeah, 100%. It's actually incredible. We get this type of feedback all the time, and obviously we use this feature a lot. And for the audience, it sounds so simple that you almost gloss over what's actually happening. What Danielle just described is, if you've ever used ChatGPT, a very obvious use case: take an existing ticket, or a conversation you've already had with a customer, and click a button to have AI generalize it into an article draft that you can then edit and make your own. So basically, think of it as copying a whole conversation you've already had with a customer, putting it into ChatGPT, telling it to write a knowledge base article, and going from there. But that is operationalized now for you, right?
Danielle Murphy:
Yeah, 100%. When I did an internal demo, I went through the flow of someone reaching out: first the AI agent tries to answer, but it's something we don't have a knowledge base article on, so it gets routed to a support engineer, and the support engineer gives the answer. What I said in our demo was that normally that might have been where it ended. The next person who runs into the same error has to look through Slack and through any internal Notion docs we may have. But now with AI, we can just click to generate an article, and boom, the article's done. Next time a customer reaches out, the AI agent has that information to give them an answer straight away. If they run into the same error, they ask the question and have their solution immediately. They don't have to wait twenty minutes or an hour for someone to be available; they can fix it and move on while evaluating or just using the product. It makes things a lot more scalable, especially for a team our size. It's such a game changer to have something do that for you.
Marty Kausas:
That's awesome. In that demo that you showed to the team, the AI, I assume, was able to answer the question automatically the next time that same question was asked.
Zac Hodgkin:
Yeah, exactly.
Marty Kausas:
That's awesome. And I'm curious, maybe a question for all three of you at once: do any of you have specific guidelines you're trying to establish for how articles should look or feel, or tone, or anything like that? Maybe starting with you, Jess.
Jess Herbert:
Yeah. So this is actually our second version of our knowledge center. Our first version had no AI; it was all human input, and it was a very specific template. If anybody has seen the YouTube video of the Exact Instructions Challenge, where the kids try to tell their dad how to make a peanut butter and jelly sandwich, don't include all the steps, and he does them literally: that was the basis we used. At the time I said, if a robot could do it, then that's how it should be written, and this was before AI was there. That idea is coming back and is important because of our AI agents. Now we have a somewhat looser template. It's a little more narrative, not that exact challenge, so the peanut butter and jelly sandwich wouldn't come out right, but it gives our users exactly what they need to know in a confined way. We also enabled the Pylon agent, which we found was answering pretty well most of the time, but we can definitely make our documentation and our knowledge better to make that agent even better. One thing, Danielle, you were talking about article generation from a single issue. One of the things I'm super excited about is that we also have an issue AI that will look back at our old responses across other tickets. So when we use that for a response and then create an article off of it, the generated article is even more comprehensive. I'm super excited for that.
Marty Kausas:
So basically, you're using AI to answer a customer question based off of previous customer responses you've given. And then at the end of that, you're generating an article based off of what the AI drafted for you.
Jess Herbert:
Yeah, basically. It's comprehensive all the way around. And it's super important, especially in healthcare: we have protected health information, so we have to be very specific about what AI has access to, what it doesn't, and how content is generated. So again, these articles are just a starting place. We still have to go through them with our same process to make sure they're accurate, that there's no harm to patients, and that our users can continue with their workflow on days when they're seeing 20, 30, maybe more patients. So yeah, it's great to have that starting place, a lifting-off point.
Marty Kausas:
Yeah, and one thing I want to bring some attention to: you mentioned the issue copilot, so basically drafting responses to customers. We find that for B2B, that's just way more practical in a lot of cases, because you care about the accounts a lot more, you're higher touch, and accuracy is way more important. If you have a million tickets per day, okay, some people are going to get bad responses; that's okay. And frankly, consumer questions are just much simpler. Our observation has been that if you're a consumer company, probably 80% of your questions are easily automatable with the AI chatbot products everyone's pushing, and those bring a ton of value because it's easy to quickly deflect those questions. But in B2B it's the inverse: maybe 20% of your questions are simple ones that could be deflected straight from content, and the rest are way more complicated. You have to go into a bunch of internal tools, pull data about the account, and look at the history and what you've discussed before. So, Zac, coming back to you: how do you think about automated responses versus copilot-style agent-assist functionality?
Zac Hodgkin:
Yeah, I'm pretty much in the same boat from what I've seen with an AI agent versus AI assist in messaging. A lot of our customers' questions are very custom, very specific. It's not super easy to just hit them with a KB article and nail it; the KB article is there to assist. But on the thing Jess called out, about the assist surfacing a message that was used in a ticket and then a KB article being created: literally yesterday, one of our TSEs shared that a pretty complicated question came in, and they used the auto-assist when generating their message. It suggested a very complicated but very correct workflow from over a year ago, from when that same TSE had suggested it. So they were able to solve it, done and dusted, and then also create a KB article. So hearing her describe that workflow is funny, because it's super true.
Marty Kausas:
And it's so cool. One thing we believe philosophically, especially for B2B, is that you should be really, really sensitive about not cross-pollinating data between customers. For example, if you do turn on an AI chatbot in B2B, you really want it pulling only from public data sources, or things you've reviewed with a human in the loop and that are out there where anyone could see them. Whereas when you use the issue copilot to draft a response, you can use more data from those previous responses, because you'll have eyes look it over and verify that nothing sensitive is going out. So that is pretty incredible; that's honestly amazing that that happened. I'm sure it's happened for us internally, but I haven't heard it yet. Maybe we're getting so used to these cool things that they're normalizing really fast, but we do have these magical moments where it's like, oh wow, that just works now. So let's talk about things you've tried that haven't worked. I'll open this up to everyone: are there any workflows or AI things you were bullish on that weren't successful, and any reasons why you think that is? I'll leave this open-ended.
Danielle Murphy:
Yeah, I'm happy to start with this one. Previously, we found that when we were trying to use an AI agent, it was hallucinating a lot. As you said, with B2B, for a lot of the issues we come across, the answer isn't just copy and paste; there are a lot of different moving parts. It was exactly that for us: policies are a feature we have, and they're written in a specific language, but the AI was giving the answer for a different language, which would make the policy useless. So I was really afraid of letting an AI agent be customer facing. But the new workflow, the one we use now with Pylon, is that it only uses information from our knowledge base. So we know what it's saying is accurate, relevant, and up to date. And if it's not confident, instead of trying to answer and ruining the customer's trust, it will just let them know that someone is going to answer soon. It's a lot more effective. And because we have the AI agent, if someone's stuck on something we have documented, instead of waiting for support from a human, they can get their issue resolved straight away.
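The gating logic Danielle describes, answer only when the knowledge base gives a confident match and otherwise hand off to a human, can be sketched in a few lines. This is a sketch only: the word-overlap similarity is a crude stand-in for the embedding search a real system would use, and the threshold value is an arbitrary assumption.

```python
import re

def _words(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word sets: a crude stand-in for embedding similarity."""
    wa, wb = _words(a), _words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def answer_or_escalate(question: str, kb_titles: list[str],
                       threshold: float = 0.35) -> dict:
    """Answer only from the knowledge base; below the threshold, hand off to a human."""
    ranked = sorted(kb_titles, key=lambda t: token_overlap(question, t), reverse=True)
    if ranked and token_overlap(question, ranked[0]) >= threshold:
        return {"handled_by": "ai", "article": ranked[0]}
    return {"handled_by": "human", "note": "Someone will answer this soon."}

# Hypothetical knowledge base article titles.
kb = ["How to rotate API keys", "Writing policies in Rego"]
```

The key design point is the explicit "human" branch: when confidence is low, the agent declines rather than guessing, which is exactly the trust-preserving behavior described above.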
Marty Kausas:
And going a little deeper into that: when you've played around with agents, have you gated them to only certain segments of customers, or certain types of issues? Or have you tried just letting the agent take a first pass at everything?
Danielle Murphy:
Yeah, so the way we tried it previously was kind of all first passes, but it was just to the support engineer. So it was more like a co-pilot and it was like, this is what I would answer. And we were like, well, it's great that you didn't answer it with that because it's not accurate.
Jess Herbert:
We did something similar and actually turned it on just for our Slack customers. As soon as the agent responds, it says, hey, I'm your AI agent, I'm going to be here, don't worry, I'll escalate to a human. And it was really well received, even when the answers weren't totally perfect. Our customers are very generous with their time and patience in relation to the AI agent.
Marty Kausas:
And one thing, Zac, I want to loop you in on. I think a lot of people in other industries, when they think about agents, make them try to impersonate humans, which I actually think does not work in support. People get really mad if you say, hey, this is Anna from support, because you're tricking them; you're making it seem like there's someone there who isn't. So Zac, can you talk about the profile that you created for Panther?
Zac Hodgkin:
Yeah, so we named it Peter Panther, and we did an AI generation for the image. And in its canned messaging, because our TSEs have really good relationships with their customers and we don't want an agent damaging the relationships our support team has built, we called out the obvious: hey, I'm an agent, I'm not going to just keep messaging you, I'm going to take a crack at this, and if it's not okay, I'll get you over to a smarter person who can answer this for you. So it's a very obvious AI guy.
Marty Kausas:
Yeah, and we've seen that across the board. For our own product, in the create-agent flow, we initially bought a pack of human faces that you could stick onto the AIs, and we found no one was using them. People thought it was super creepy, and it was just not cool. Especially if you're pretending to be a person and then respond incorrectly, that makes your company look so bad, right? So it's better to get ahead of it, maybe even say "AI beta": that way it's clear it's a beta you're trying out, not the real thing. So I highly, highly recommend, when people start implementing conversational agents or AI chatbots, that they don't try to fake a person. Make it clear this is just a first-pass attempt to help speed up the process.
Danielle Murphy:
We have a similar one that will say, hey, I'm in training, I may not always hit the mark, but I'm here to help. It reminds me of Duolingo: I think one of their notifications, after constantly trying to remind you to do your lessons, says, oh, these notifications don't seem to be working, I'll stop. And apparently people felt bad at that notification, went, oh no, no, it is working, and returned to their lessons. It's quite similar when the AI bot says, I might not always get it right, but I'll try. You nearly feel some sort of sympathy. It's like, no, it's okay, you're doing your best, this is fine.
Jess Herbert:
We saw the same thing. We actually had some of our customers saying "good job" to our bot, which was fun to see.
Danielle Murphy:
I know I'm like a proud parent whenever I see it has answered it correctly and I'm like oh this is weird but I'm so proud.
Marty Kausas:
One of the things to point out as well: when we think about knowledge management, pre-AI it was the source of truth for self-serve support, right? If you Google something or search the help center, you can answer your own question. It's even more important now, because that knowledge is powering everyone's drafting capabilities when they want the AI to take a first pass, and it's powering the conversational agents. When people come to us and say, hey, I want to implement AI, our first question is: okay, what does your knowledge base look like? If they don't have a knowledge base, then what is the AI going to learn from? Where is the data going to come from? Everything has to start with that knowledge base being really clean. So anything you can do to make that process more efficient, identifying what to write and actually writing the articles with AI article generation, as we've discussed, ends up being super important. Moving upstream in that flow, to identifying what to write: has anyone tried anything to help figure out what content is missing, or how to think about what to prioritize?
Jess Herbert:
I'll start on that one. There's actually a great question in the chat right now about exactly that. I think all of us who use Pylon are super lucky to have the AI knowledge-gaps option. It looks back at our tickets, and it can tell when a single ticket needed a new article, and it can also find trends across certain topics or ideas that need one. Then you can, again, generate an article, and it will show you the tickets associated with that gap and use all of their data to generate the article. Before that, it was: oh, we realize we don't have the article when somebody asks the question, it goes into a backlog, and it waits until you can write it on your own. So I think those two, the gaps and the articles needed for a specific ticket, really accelerate exactly what we need to do.
Marty Kausas:
Yeah. Anyone else? Danielle, have you thought about gaps or implemented those or planning to?
Danielle Murphy:
Yeah, so one example was actually today, and it was more for future-proofing. We changed something in a module, and any of our customers currently using it are going to need to update. So we had this whole conversation about why they need to update and what steps they need to take. From that, I could generate an article specifically for people upgrading from the previous version to the new one, rather than setting it up fresh. I could also use the AI to find which customers have talked about this and may be using it. And then again, with Pylon, I could send a broadcast to all those customers specifically: hey, we see you're using this; here's an article for how to upgrade it. It's so nice when all the different features come together. For support, being proactive like that is definitely valuable: hey, this is going to break in the future because of some changes, but here's what you can do now to get ahead of it, and here's a knowledge base article specific to you. They're not having to read around it and guess which parts apply; it's there for them straight away, and they don't run into an issue further down the line where they just get frustrated.
Marty Kausas:
Got it. And Zac, what about you? What was your process before, or currently, and where are you going? How do you identify what to write content for, and how are you thinking about that moving forward?
Zac Hodgkin:
Yeah. For every support interaction, our TSEs try to follow a process: understand the customer issue; use their resources, search the KB, search Docs, search Slack, ask someone else, ask engineering; and when they find the answer, if it came from outside those resources, create a new resource and use it to share with the customer. So we try to shorten the gap by incorporating that process into the actual ticketing process. But, like we've talked about already, there are things we miss, things that aren't there. And I think Pylon has a really good way of identifying potential knowledge base articles, the missing gaps. Another thing that falls into this, which actually isn't a knowledge-gap thing, it's the opposite problem, is one of the cool things Pylon can do.
Marty Kausas:
I know we all keep saying "Pylon does these things," and I'm sorry, by the way. It's just that when we talk about what's next, it's platform stuff, right? It's the tools.
Zac Hodgkin:
Yeah, it is. But Pylon does do these things, and they work really well, so we're going to talk about them. One of the things you can do is, when you're generating an article, see whether there are potential duplicate articles. It helps you maintain the health of your knowledge base, so you don't end up with a bunch of duplicates, which in the past was something miserable to sort through. Knowledge base cleanup and maintenance is a huge piece of this, and it's kind of a nightmare unless you stay on top of it.
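The duplicate check Zac mentions can be approximated with a simple title-similarity pass. Again, this is only a sketch under stated assumptions: a real system would compare embeddings of full article bodies rather than word overlap of titles, and the 0.5 threshold is arbitrary.

```python
def title_similarity(a: str, b: str) -> float:
    """Jaccard word overlap between two titles; a stand-in for embedding similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def find_possible_duplicates(draft_title: str, existing_titles: list[str],
                             threshold: float = 0.5) -> list[str]:
    """Surface existing articles that look like near-duplicates of a new draft."""
    return [t for t in existing_titles
            if title_similarity(draft_title, t) >= threshold]

# Hypothetical existing knowledge base.
existing = ["How to reset your password", "Configuring SSO"]
```

A check like this runs at draft time, before publishing, which is what keeps the duplicate cleanup from becoming the after-the-fact nightmare described above.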
Marty Kausas:
Yeah. I'll also say, and I'm sure everyone's had this experience, both on the panel and listening: we hear the Franken-desk story all the time. You come into a new team, a new company, and you have no context on everything that's been written. It's literally a crazy hodgepodge of potentially hundreds of articles. I think that's a very obvious place where this helps: hey, you're about to write an article, and something just like it already exists. It's one of those things you weren't even doing manually before, or maybe you'd try, but it's so hard and so frustrating to do manually. And Jess, I know you had something to say too.
Jess Herbert:
Yeah, the other side of that, too, is, like I said, we're working on V2 of our knowledge center. So we have all of our old documents as we're rewriting them, and the duplicate-article notification has come in really handy a couple of times, just to stop our customers from being confused about whether to do it this way or that way. So yeah, it's fantastic.
Marty Kausas:
Awesome, okay. So we're about at the time where we should move to Q&A, and we have a bunch of questions. Actually, sorry, there's one more really important question first, and if all of you could share quickly: how has the team received the AI stuff? Because some people are concerned that it threatens their job, or that it will make their job more robotic because they're just pressing a bunch of buttons. Give an honest read on how it's been received internally.
Danielle Murphy:
For us, it has gone down really well, especially because the AI agent and the knowledge base articles help a lot with the repetitive stuff, but not as much with the issues that take a lot of investigation and troubleshooting. And for us as engineers, those are the more fun issues. A lot of times you get stuck with the repetitive issues, explaining something or sending documentation links, so you don't have as much time to deep-dive on the issues you're excited to investigate. So that's one way we look at it: it's taken away the grunt work, and we're free to deep-dive and troubleshoot the issues we're actually excited about, where you've no idea what the cause could be and there are a lot of moving parts. So my support team is very excited. As for other teams, such as product: I'm very lucky at Spacelift that our product side knows there's a lot of value in our customer interactions. Previously, they'd ask me, oh, which customers should we reach out to about this feature we want to release? And I'd have to take time to think, okay, who have I had conversations with around this, and search Slack. But now, with the Ask AI feature we have on our issues, it can search and do that for me, which makes it a lot more accessible for me and for them. So we can build a much more customer-focused product, based on real information and real conversations we've had, in a matter of minutes.
Jess Herbert:
Yeah, I totally agree with that. Our team has embraced all of the tools we've been given. We were probably doing it ad hoc before, with ChatGPT or whatever other tool we had, so having it all in one place is great. And it goes beyond knowledge management. Like you said, Danielle, it's searching across all the tickets, not having to remember every single detail, and finding things easily. That gives us a little edge, some more time to do the fun things.
Marty Kausas:
Zac, anything quick?
Zac Hodgkin:
No, same boat. The TSEs love it.
Marty Kausas:
Awesome. Cool. Okay, let's hop into the Q&A section and start from the top. By the way, if you don't see the Q&A: on the right there's a sidebar, Chat is the default, and the Q&A is the third tab. So, first one, from Ravi: after an article gets generated from a customer conversation, do you have an approval mechanism before publishing? And if you're a big company, how do you manage all these generated articles? Zac, you mentioned approval workflows already; do you want to take this one?
Zac Hodgkin:
Yeah. Basically, for us, it's what I mentioned earlier: specific people who have produced vetted, solid content for an extended amount of time can publish without getting someone's review. But for our TSEs who are more junior, we have an approval process: they ask for someone's review, the reviewer reviews it, and they do a little back and forth if more information is needed. Once it's up to snuff, we publish it. That's how we do it here at Panther.
Marty Kausas:
And just tactically, can you describe how does someone ask for review?
Zac Hodgkin:
Yeah, that's a good question. Currently, the way we do it in Pylon is with comments. We'll tag someone else and say, hey, this is ready for review. The reviewer will say, hey, it looks like such-and-such is missing, can you check on this? The other person says, all right, done, and then we say, good to go, publish it. But one of the feature requests we'd love is an actual approval state and a queue within the system, because that's what we've had in the past with other knowledge base platforms. So we're just waiting for Pylon to give us that type of workflow.
Marty Kausas:
Totally. As you were describing that, I was in pain listening to the comment workflow. For anyone listening: basically imagine Google Docs style. You highlight something and tag someone, and that DMs them in Slack with whatever comment was left. So yeah.
Jess Herbert:
I mean, it's that same workflow, and it's a little painful, but it's not horrible.
Marty Kausas:
This is how you know this is the truth; the truth is coming out here. So it's not all AI magic. Okay, cool. Let's go to Jackie's question: how do you balance the use of AI with the loss of hands-on skills, for example, using AI to create content versus writing the content yourself? Danielle, I feel like you might have an opinion here.
Danielle Murphy:
Yeah, definitely. So, I hate writing content. And I think part of it is that a lot of our customers are based in America, and I've realized over the past few years there's a big difference between what I speak, which is Irish English, and American English. Previously I'd write an article, or even a Slack message, think it was fine, send it to someone, and they'd say, what does that mean? And I'd realize, oh, okay, that must be Irish English. But with AI, it kind of does it for me in American English, and I think it actually improves my skills rather than eroding them, because I'm still reviewing the articles and still making changes. I'll read something and think, okay, saying it that way, or this style of flow, actually works a lot better than what I would have had. So yeah, I definitely think it's improving my skills each time rather than losing them.
Marty Kausas:
And maybe the heart of the question is also about losing hands-on skills. Does it feel like you need to know less about the topic the article is being generated from, or do you still need to maintain the same level of knowledge around it?
Danielle Murphy:
I'd say for now we're definitely still maintaining the same level of knowledge, because anything in the article, we should make sure it's the truth before we publish it. But it's also helpful when it adds something extra and we go, oh, where did that come from? We can see it was referenced in a particular issue, go look at that issue, and realize, okay, I didn't know that feature also had this little extra part. So again, it improves things: you're not starting from scratch thinking, this is all I know and I hope that's it. The AI has all the information available, and you can go through it and say, okay, maybe that's not 100% applicable, we went off on a bit of a tangent in that conversation, but this part is, and I didn't actually know that. So you're constantly learning from it as well.
Marty Kausas:
Yeah, and one thing to point out is that if you're doing AI article generation, that's coming from responses that you've given to customers. So you basically have to already have had the background and information to know what to say, or someone on your team at least had to. And then really, the article generation is just taking work you've already done and putting it in a different format that is more generalized. So maybe that's another way to think about it. Okay, a question from Jackie: what ethical considerations should we keep in mind when using AI in the knowledge management space? I will open that up for whoever wants it.
Jess Herbert:
I think for us, yes, it's ethical. Like I said before, we have protected health information in our tickets, and we have to make sure that's filtered out and not included in anything that could be shared across customers or even publicly. So I think that's one of the biggest considerations we take into account, whether you'd call it ethical or not. I'm not sure if that aligns.
Marty Kausas:
I think that counts. Danielle, okay, a question directly for you: what are the sources of knowledge base gaps, and how do you figure out what topics you need to create an article for? I think maybe you already covered this, but if you want to resummarize.
Danielle Murphy:
Yeah, definitely. So right now, the way our flow works is that as we're closing out tickets, we'll mark them as either needing an article, needing an existing article updated, or already having an article. That way we're not missing anything. Obviously, the downside is that it's just as they're coming in. Now Pylon has knowledge gaps that will tell us what's happening, so we don't need to worry about that as much, but another way we did it was with AI again: it could automatically tag issues, so if we had a lot of issues around, say, the user management feature, we could have our dashboard showing which features were the highest case drivers. For user management, for example, we could click into that and be like, okay, we definitely need some articles around this, this issue comes up a lot, let's focus on that. So at the start, it was looking at what the case drivers were and what articles we could get out to reduce the amount of tickets we were getting. And now it's just closing them as they come and making sure there's an article there. So we're constantly scaling up as it comes, and it just makes our lives a lot easier.
Marty Kausas:
Awesome. Okay. I know we're right at the end here, so I want to be conscious of time. I really want to thank everyone. Panelists, Jess, Danielle, Zach, thank you so much for being on the panel. This has been great. Thank you for also giving Pylon some shoutouts; I really appreciate that. And for everyone, we'll send out a recording of this afterwards. Also, I know that as we were talking, some of this is kind of unclear unless you can visually see what was discussed. It's kind of crazy to be like, okay, when you say AI article generation, do you actually mean AI writes the article, or how does that actually work in practice? So we'll send a recording explaining how you can run some of those workflows as well. So yeah, thank you, everyone, so much for coming. Thank you to the panelists. Really appreciate your time and sharing your wisdom. You are trailblazers here, obviously, and clearly a lot of people want to hear from you. So yeah, appreciate your time. And thank you to everyone who joined. And shout out to Richard and the Support Driven team for making this happen. So thank you very much, everyone.
Danielle Murphy:
Thanks, everyone. Thank you.
Marty Kausas:
about this. And can everyone see my screen? Maybe Zach, you can confirm? Danielle? Cool. So, yeah, the topic today is really fun. And, you know, a lot of what you hear about, or webinars you see, or things on LinkedIn, when you think about AI in support or post-sales, end up being about AI deflection and trying to answer questions automatically with AI. But there's a whole other part of post-sales and support and success that includes knowledge management. And this actually ends up being, especially for B2B companies, an even bigger part of what AI can do, but it's much less talked about. So super excited to chat about this topic today with Zach, Danielle, and Jess, and go through their experiences and how they think about it. And yeah, super, super honored to do that. So, the quick agenda that we're going to go through here: first, I have to shill my company. I'm sorry, that'll take a couple of minutes at the beginning. We'll go through just a quick intro on Pylon and things that we're thinking about on the knowledge management side. Then we'll go to the panel discussion, where I'll go through intros for everyone. And then we'll go to a Q&A. If you have questions throughout, you can always drop them into the Q&A section. We can see them live as you're commenting or asking questions, they can be upvoted, and we'll get to them at the end. So yeah, really excited for this. And I'll dive in with my quick advertisement. I'm sorry, ad blocker won't work today. So, quick about Pylon: I'm one of the co-founders. We're building the first customer support platform for B2B companies. Our goal is really to bring together the entire post-sales team to work in one place, instead of having support, success, solutions, and professional services all live in separate systems. We're bringing all those teams together and really combining all those products into one.
A lot of people like us because you can offer support over shared Slack channels or Microsoft Teams instead of the traditional channels. AI batteries are included, as you'll learn today, and of course, you want a modern tool. So quickly, just setting the stage for how a lot of people are thinking about AI and knowledge management, let's go through how people used to think about this. You find gaps in your help center and your documentation, you write articles, and then you try to keep your knowledge base clean. So before, you might have a question and you shoot it into a Slack channel and say, hey, here's a gap, we should go write an article for this. And then it gets missed, or it's supposed to say something and we ignore it. Also, when you write articles, you would reference a bunch of old tickets and then write the article from scratch. And in terms of keeping knowledge bases clean, you check for duplicate content and you try to maintain consistent formatting. All of this is a lot of work, and rightfully so, because it ends up being really important for your customers. The new ways that we're seeing the industry move are: AI can help you discover what gaps exist in your knowledge base and help you find references to the questions people are asking about those topics. AI can help draft articles for you, so not completely rewriting and publishing them, but human-in-the-loop style, helping you create things from scratch faster, even based on previous responses you've given to other customers. And then finally, AI can detect similar content so you're not rewriting the same thing, and even write articles into specific formats for consistency. Okay, so we're done with the ad. I'm really excited now to go to our panelists here.
And so, yeah, we'll go through one by one, and I'm going to ask each of you to introduce yourself and your role, say where you're based, and tell us what your company does to start. So Jess, we can start with you.
Jess Herbert:
Yeah, nice to meet everybody. Thank you for joining. My name is Jess Herbert. I work for Canvas Medical, and I'm our Senior Manager of Customer Experience. Canvas Medical is an EMR at heart that is accelerating everyday medicine. So we have APIs, SDKs, database models, everything to help our users customize the way they care for patients.
Marty Kausas:
Thanks. Danielle?
Danielle Murphy:
Hey everyone, I'm Danielle. I'm based in Dublin, Ireland. I'm the head of support engineering in Spacelift and Spacelift is essentially a platform that will help you automate and optimize your infrastructure as code workflows. So what we basically do is we make managing your cloud infrastructure a lot more efficient, scalable and secure.
Marty Kausas:
And maybe I'll add in one more part of the question. Danielle, how long have you been at Spacelift? And then also how long have you been in the industry overall?
Danielle Murphy:
So I've been at Spacelift coming up on two years next month. For the DevOps industry, I'd say maybe two or three years, but cloud infrastructure I've been in for about five or six years. This role is more specialized, and for that I'd say about two to three years.
Marty Kausas:
Awesome. And just really quick, rewinding back to you, Jess: what about yourself?
Jess Herbert:
Yeah, I have been at Canvas for, it's going on seven years, I believe, this year. So I was actually our first customer-facing employee way back in 2018. And before that, I wasn't in customer support directly; I was caring for patients in clinical care. From there I moved to at-the-elbow, or side-by-side, support for EHR users, and then kind of evolved to here.
Marty Kausas:
Awesome. And then Zach, sorry for rewinding there.
Zac Hodgkin:
Totally cool. My name is Zach, and I do support things at Panther. Panther is a cloud SIEM. So basically, if you don't know what a SIEM is, it collects a bunch of logs across different platforms, and then you can run detections, and security teams can use it to do investigations based on things that they're seeing in the logging. And then the other questions: I live in Ann Arbor, which I said earlier, and I've been in cybersecurity for about eight years now. And yeah, is there anything else that I need to answer?
Marty Kausas:
Yeah, that's great. And maybe starting with you on a question about knowledge management: how do you think about knowledge management? What are the core components as you think about it broadly?
Zac Hodgkin:
So I think, in the most general terms, knowledge management is just the process of creating, updating, and maintaining answers to common questions or issues to quickly solve problems, whether for people on internal teams or, and this is the best thing about it, putting that stuff in front of customers for them to self-serve, so that they never actually have to engage with support teams. But yeah, knowledge management is important outside of support teams and customers too. It can also help out your other internal teams.
Marty Kausas:
And can you talk about the team structure at Panther? Is there anyone who is specifically dedicated to knowledge management, or is it split across the team?
Zac Hodgkin:
So in terms of people that contribute to our knowledge base, it's mostly our support team. We do have some people across other teams that contribute from time to time. And then in terms of how we have it split up, we basically have two tiers of contributors to the knowledge base, and actually, shout out to Sally, since she's the one that implemented this at Panther. We have KB publishers, who've been vetted in the content that they're creating, and they can basically just go and create new content. And then we also have regular contributors, and those people go through an approval process to ensure that all the content meets our standards before being published to our customers or internally. But yeah, it's mostly, actually it's entirely, our support team.
Marty Kausas:
Got it. Danielle, how does that compare to what you guys have set up? Are there dedicated resources for knowledge management? Is it handled by you or other people on the support team?
Danielle Murphy:
Yeah, it's pretty similar. For us as well, it's mainly our support team. And I think I like it that way, because we have a lot of customer interaction, so we're familiar with how they like receiving information. For example, I think we're all guilty of not loving reading documentation. Our knowledge base articles are kind of like, okay, for this error: step one, two, three, whereas our documentation is more focused on, let me tell you all about this feature. So it's the support team, and we're a small but mighty support team. There are three of us at the moment, and we all contribute to it. We don't have a dedicated person, but again, I'm quite a fan of that, because it's not just one person's view or one person's information; it becomes a shared collective. I'm based in Europe, and the two other guys on my team are both based in America, so we'll always have someone on answering tickets as they come in. But then we have it split up where you have your on-queue time and your off-queue time. Right now, for off-queue time, the focus is on getting our knowledge base articles updated. So that's how we have it split, and it's working quite well so far.
Marty Kausas:
And so do you have dedicated times of the day? So you said on queue, off queue. So, hey, I'm the directly responsible individual for taking items off the queue during these hours, and then separately, there's an expectation that I'm writing content. Is that kind of how it works?
Danielle Murphy:
Yeah, exactly. So I could spend half my day, right, I'm on queue: any tickets that come in, I'll jump on and answer. And then after that, when I have my off-queue time, that's when I'll go and review what I've looked at. I'll be like, okay, this one doesn't have an article, let's get one created. And then same for the next person: when I'm off queue, they'll be on queue, so we still have coverage. And when they're off queue, they'll do the same with their articles, or maybe that's the time they're learning. And we just continue that.
Marty Kausas:
And can you give us a sense, is everyone 50/50 in terms of time split, or are some people writing more content than others? Is there a reason to distribute it differently?
Danielle Murphy:
So right now I'm 50/50, and the two guys on my team are 80/20, just the way our time zones work out, to make sure that we always have someone on the queue. So anything urgent, we can flag straight away.
Marty Kausas:
Awesome. That's actually amazing, and I want to pause really quickly here. It's very rare that we find people who are so dedicated. Well, I guess the bigger you get, the more resources you generally have, but especially if you're a leaner team, it's hard to dedicate real resources toward writing content and keeping it up to date. So I just want to give a shout out: that's really hard to do, and it's awesome that you guys have prioritized it.
Danielle Murphy:
Well, Pylon has definitely helped us a lot there. And actually, with your introduction of Pylon, one thing that I find with using Pylon, and Marty did not pay me to say this, is that it really helps highlight the value that support brings to the wider organization. I think sometimes that can get overlooked, but Pylon makes it a lot more accessible for everyone to see and a lot more visible: oh, this was how easy it was to do, so let me do this quickly so I can show the other teams what we can bring to the table.
Marty Kausas:
Awesome. Yeah, I will definitely double-click on that. But Jess, I want to loop you in really quick. Give us a sense: what's the lay of the land at Canvas when it comes to how you think about knowledge management, and what does the team look like?
Jess Herbert:
Yeah, so our team is also very small but very mighty. There are three of us, and we actually encompass everything from implementation, onboarding, support, and anything in between. We have one support agent who puts most of her effort into the knowledge center while also being in the queue. So I aspire to get to a point where we can have on-queue and off-queue time; shout out to Alyssa on that one. But we all contribute. So if I'm answering a ticket and something comes up, I make sure that we update the article right then, and also shout out to Pylon that it's easy to do right away. And even after an article is written, the three of us will review it together and make sure that we have the same understanding, so that we're supporting our customers in the same way.
Marty Kausas:
Awesome. And can you give us a sense as well? So you're in the healthcare space, and I assume that's a little different. Can you paint the picture of what type of support you're providing, maybe the general volume, and where people are submitting tickets?
Jess Herbert:
Yeah, absolutely. So we're the EHR, and our customers are care delivery organizations. They are actively caring for patients or supporting the people that are caring for patients. So there's urgency when they message in, because it's likely impacting patient care right at that time. It's essential that we have this documentation and that it's easy to read, quick to reference, and unblocks them for what they need. Being in healthcare, like I said, there's urgency, and we try to keep a very quick response time for that reason.
Marty Kausas:
And do you feel like there are any differences being in healthcare? Is it just the priority and urgency of the requests, or do you think it's any different from traditional B2B companies?
Jess Herbert:
Yeah, I actually do, because there's the layer of patient safety. So for every ticket that comes in, if it's a user needing functionality, does that impact patient safety? Is that going to have a negative outcome for their patient? Could it cause harm? That's always at the top of our mind. And we always want to make sure we're enabling all of our users to have the resources they need to give excellent care to their patients. So yeah, absolutely more complicated.
Marty Kausas:
Awesome. Okay, Zach, I want to shift back to you. Let's get into knowledge management and where we're going. Can you talk about how you started to think about AI in your knowledge management workflows, and where you see that starting to go?
Zac Hodgkin:
Yeah, that's a good question. Honestly, I never had the idea of, oh, how can AI help me? It kind of fell into our lap as we were going through the support process. Like, when people started using ChatGPT, our TSEs started using it to help improve their messaging, or creating agents with ChatGPT to reference our knowledge base and our documentation. And then there's this current step that we're in, where the tooling that we're using, which is Pylon, has been listening to us and its other customers and has implemented some really cool stuff that has just made the lives of our support agents easier. And let me know if you want me to talk a little more.
Marty Kausas:
Yeah, well, I guess people probably want specifics here. So what are the details? You mentioned AI starts coming out, ChatGPT starts getting used by everyone. And by the way, was the team concerned at all about security with people pasting into ChatGPT? Was that ever an internal discussion?
Zac Hodgkin:
Yeah, I think the two big discussion points with ChatGPT were, one, don't be an idiot and post confidential information into it. And the other part was, there are sometimes when a TSE is responding to customers with the most obvious copy-and-pasted ChatGPT answer, and I had to be like, you could have said the same thing in one sentence instead of posting four paragraphs. So yeah, there were definitely some...
Jess Herbert:
Oh, sorry, what were you saying? "I hope this finds you well." That's a big giveaway.
Zac Hodgkin:
Yeah, exactly. So making sure that TSEs are proofreading what they're actually sending to customers, and also making sure it sounds like them and not like someone else. Those were the two big things when they started adopting ChatGPT. But one of the other cool things is we used to use Grammarly internally, and no one needed to use that anymore once TSEs started, like...
Marty Kausas:
And one thing I also want to highlight that you said: instead of asking, hey, how can AI help me, you took the approach of looking at your current processes and seeing how AI can naturally be layered in. I feel like that's the correct way to approach it, because that way you're just optimizing what you already have. So you know you're going to get gains, right? It's like, hey, I do this process all the time; how do I make it faster? Or how do I not have to do it at all? How do I not have to think about it? I think that's really important, and not just in knowledge management but everywhere, we should be thinking that way. So yeah, that makes complete sense. Danielle, could you talk about how you've been thinking about AI in knowledge management? Maybe we go a little deeper with you: what workflows have you noticed, what were you guys doing before, and what have you started to try doing now with AI?
Danielle Murphy:
Yeah, so when I started, we weren't really using any AI for building out our processes. So it was a very painful time of going through Slack, reviewing cases, being like, okay, I'll document that, I'll document that.
Marty Kausas:
But AI is definitely- And really quick, were people sending messages into a Slack channel for you? Or when you say you were looking in Slack, what do you mean?
Danielle Murphy:
Yeah, so we support customers through Slack. We'll have shared Slack channels with them, and then we have a support triage channel where it will come through for us. That was how I built up a lot of my knowledge when I started: going back case by case by case and finding things out that way, because we didn't have an internal knowledge base for me to go through. So I was constantly finding things out that way, and when I wanted to start building out an internal knowledge base, it was just copy-paste, linking conversations, thinking, how will I word this? Does this make sense? That kind of way. AI has definitely helped big time there, especially since Pylon has a generate-article feature. For me, that has been such a game changer, because with one click, I already have something to start with. And my manager always has the perfect metaphors to reference. I remember previously I was saying that I really wanted to find the time to create all these articles, or an FAQ section, but I was spending my time answering the exact same questions as they came in. And so the metaphor he used was that cartoon of the two cavemen pushing a cart with square wheels, and someone says, oh, here's a circular wheel, and they're like, no thanks, we're too busy. But with AI, I feel like I have the AI putting on those circular wheels for me while I'm still pushing the cart. I don't actually need to stop and pause what I'm doing to change. So that's been great.
Marty Kausas:
Danielle, just to go into more detail on that really quick: can you help us visualize, when you say AI article generation, what is that pulling from? Is that all your past conversations, or how does that work?
Danielle Murphy:
So the way we do it now is, in Pylon, we'll have our issue, so that will be the back-and-forth conversation with the customer. Previously I would have just taken snippets from it and my own knowledge and built it out that way, but now, with the generate article button, it does it for me. It will summarize the conversation and any steps I've mentioned, any screenshots I've put in, or error messages. And then obviously I know myself, okay, we actually didn't touch on this specific error that comes up with this feature, but I know it does come up, so I'll just add it in there. That's been such a game changer, because I find that the hardest part is just starting, thinking, okay, how would I say this? Which way would this flow best? But with that, I just have a starting place. I can just dump all my thoughts into it, and it will fix it up for me. So I'm not worrying now about, oh, I'll have to put time aside to do it. I can just get started, clean it up, and I have something to work on.
Marty Kausas:
Yeah, 100%. It's actually incredible. We get this type of feedback all the time, and obviously I use this feature a lot. And for the audience, it sounds so simple that you almost gloss over what's actually happening. What Danielle just described is so simple, and if you've ever used ChatGPT, it's a very obvious use case: hey, take an existing ticket, or a conversation that we've already had with a customer, and click a button to have AI take that and generalize it into an article draft that you can then edit and make your own. So basically, think of it as the same thing as copying a whole conversation that you've already had with a customer, putting it into ChatGPT, telling it to write a knowledge base article, and then going from there. But that is operationalized now for you guys, right?
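[Editor's note: the workflow Marty describes, turn a resolved ticket into a prompt, have an LLM draft the article, then review before publishing, can be sketched roughly as below. This is an illustration only, not Pylon's implementation; the ticket contents and the `complete()` call are hypothetical stand-ins for whatever LLM client you use.]

```python
# Illustrative sketch only -- not Pylon's implementation.
# Turn a resolved support conversation into a knowledge base article draft.

def build_article_prompt(messages):
    """Flatten a ticket's back-and-forth into a single LLM prompt.

    `messages` is a list of (author, text) tuples from the resolved ticket.
    """
    transcript = "\n".join(f"{author}: {text}" for author, text in messages)
    return (
        "Rewrite the following resolved support conversation as a "
        "generalized knowledge base article. Remove customer-specific "
        "details; keep error messages and numbered steps.\n\n" + transcript
    )

# Hypothetical resolved ticket.
ticket = [
    ("Customer", "Stack runs fail with 'state lock timeout'."),
    ("Support", "Release the lock, then re-trigger the run: step 1..."),
]

prompt = build_article_prompt(ticket)
# A human then reviews and edits the draft before publishing --
# the same human-in-the-loop step the panelists emphasize.
# draft = complete(prompt)  # `complete` is a stand-in for your LLM client
```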
Danielle Murphy:
Yeah, 100%. And when I did an internal demo, I went through the flow of someone reaching out: first, the AI agent tries to answer it, but it's something that we don't have a knowledge base article on, so it gets routed to a support engineer, and then the support engineer gives the answer. What I said in our demo was that normally, that might have been where it ended. The next person, if they ran into the same error, would have to look through Slack and through any internal Notion docs that we may have. But now, with AI, we can just click to generate an article. Boom, the article's done. Next time, when a customer reaches out, the AI agent has that information to give them an answer straight away. If they're running into the same error, they ask a question and straight away they have their solution. They don't have to wait maybe 20 minutes or an hour for someone to be available; they can just fix it and move on, while either evaluating or just using the product. It makes it a lot more scalable, especially for a team our size. It's just such a game changer to have something do all that for you.
Marty Kausas:
That's awesome. In that demo that you showed the team, the AI, I assume, was able to answer the question automatically the next time that same question was asked?
Danielle Murphy:
Yeah, exactly.
Marty Kausas:
That's awesome. And I'm curious, maybe a question for all three of you at once: do any of you have specific guidelines that you're trying to establish for how articles should look or feel, or tone, or anything like that? Maybe starting with you, Jess.
Jess Herbert:
Yeah. So this is actually our second version of our knowledge center. Our first version had no AI; it was all human input with a very specific template. If anybody has seen the YouTube video of the exact directions challenge, where the kids try to tell their dad how to make a peanut butter and jelly sandwich, and they don't include all the steps, so he does them literally: that was the basis we used. At the time, I was like, if a robot could do it, then that's how it should be written, and this was before AI was there. So that is coming back and is important because of our AI agents. Now we have a little bit looser template. It's a little more narrative; it's not that exact challenge, so the peanut butter and jelly sandwich would not be accurate, but it gives our users exactly what they need to know, in a confined way. We also enabled the Pylon agent, which we found was answering pretty well most of the time, but we can definitely make our documentation and our knowledge better to make that agent even better. One thing, Danielle, you were talking about the article generation and how it takes that single issue. One of the things that I'm super excited about is we also have this ability for an issue agent, or an issue AI, that will look back at our old responses across other tickets. So when we use that for a response and then create an article off of it, it's even more comprehensive using that generate article. So I'm super excited for that.
Marty Kausas:
So basically, you're using AI to answer a customer question based off of previous customer responses you've given. And then at the end of that, you're generating an article based off of what the AI drafted for you.
Jess Herbert:
Yeah, basically. It's comprehensive all the way around. And it's super important: especially in healthcare, with protected health information, we have to be very specific about what AI has access to, what it doesn't, and how content is generated. So again, these articles are just a starting place, and we still have to go through them with our same process to make sure they're accurate, there's no harm to patients, and our users can continue on with their workflow when they're seeing 20, 30, maybe more patients a day. So yeah, it's great to have that starting place, to have a lifting-off point.
Marty Kausas:
Yeah, and one thing I want to bring some attention to: you mentioned the issue copilot, so basically drafting responses to customers. We find that for B2B, that's just way more practical in a lot of cases, because you care about the accounts a lot more, you're higher touch, and the accuracy is way more important. If you have a million tickets per day, okay, some people are going to get bad responses; that's okay. And frankly, the consumer questions are just much simpler. Our observation has been that if you're a consumer company, you probably have 80% of your questions that are pretty easily automatable with the AI chatbot stuff that everyone's pushing, and that brings a ton of value because it's easy to quickly deflect those questions. But in B2B, it's the inverse: you have maybe 20% of your questions that are simple enough to be straight-up deflected based on content, and the rest are way more complicated. You have to go into a bunch of internal tools, pull data about the account, and look at the history with them and what you've discussed before. So I guess, Zach, coming back to you: how do you think about automated responses versus copilot-style, agent-assist functionality?
Zac Hodgkin:
Yeah, I'm pretty much in the same boat from what I've seen with an AI agent versus AI assist in messaging. A lot of our customers' questions are very custom, very specific. It's not super easy to just hit it with a KB article and nail it — the KB article is there to assist. But on the thing Jess called out, about the assist surfacing a message that was used in a ticket and then a KB article being created: literally yesterday, one of our TSEs shared that a pretty complicated question came in, and they used the auto assist when generating their message. It actually suggested a very complicated but very correct workflow from over a year ago, from when that same TSE suggested it. So they were able to solve it, done and dusted, and then also turn it into a KB article. So hearing her describe that workflow is just so funny, because it's super true.
Marty Kausas:
And it's so cool. One thing philosophically that we believe internally is that, especially for B2B, you should be really, really sensitive about not cross-pollinating data between customers. So, for example, if you do turn on an AI chatbot in B2B, you really want it pulling only from public data sources, or things you've reviewed human-in-the-loop and that are already out there where anyone could see them. Versus when you do the issue copilot — drafting a response for you — you can use more data from previous responses, because you'll have eyes look it over and verify that nothing sensitive is going out. So that is pretty incredible. That's honestly amazing that that happened. I'm sure it's happened to us internally, but I haven't heard it yet. Maybe we're just getting so used to these cool things that it's starting to get normalized really fast. But we do have these magical moments where it's like, oh wow, that just works now. That's cool. So let's talk about things you've tried that haven't worked, and I'll open this up to everyone. Are there any workflows or AI things that you were bullish on but that weren't successful, and any reasons why you think that is? I'll leave this open-ended for anyone.
Danielle Murphy:
Yeah, I'm happy to start with this one. So previously we found that when we were trying to use an AI agent, it was exaggerating a lot. Now, as you said, with B2B, for a lot of the issues we come across, the answer isn't just copy and paste — there are a lot of different moving parts. It was exactly that for us: policies is a feature we have, and it's written in a specific language, but the AI was giving the answer for a different language, which would make the policy useless. So I was really afraid then of letting an AI agent be customer-facing. But the new workflow, the one we use now with Pylon, only uses information from our knowledge base center. So we know what it's saying is accurate, relevant, and updated. And if it's not confident, instead of trying to answer and ruining the trust with the customer, it will just let them know someone's going to answer this soon. It's a lot more effective. And because we have the AI agent, if someone's stuck on something we have documented, instead of having to wait for support from a human, they can get their issue resolved straight away if there is a knowledge base article.
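[Editor's note: a minimal sketch of the fallback pattern Danielle describes — answer only from knowledge base content, and defer to a human when confidence is low. All names (`Article`, `score`, `answer`) and the word-overlap scoring are hypothetical stand-ins, not Pylon's actual implementation.]

```python
# Sketch: KB-grounded agent with a confidence threshold and human fallback.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    body: str

def score(question: str, article: Article) -> float:
    """Toy relevance score: fraction of question words found in the article."""
    words = question.lower().split()
    text = (article.title + " " + article.body).lower()
    return sum(w in text for w in words) / len(words)

def answer(question: str, kb: list[Article], threshold: float = 0.6) -> str:
    """Reply from the KB if confident; otherwise defer to a human."""
    best = max(kb, key=lambda a: score(question, a), default=None)
    if best is None or score(question, best) < threshold:
        return "A teammate will follow up with you shortly."
    return f"Based on our docs ({best.title}): {best.body}"
```

The key design choice is the explicit fallback branch: a low-confidence match produces a handoff message rather than a guessed answer, which is exactly the trust-preserving behavior described above.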
Marty Kausas:
And going a little deeper into that: when you've played around with agents, have you gated them to only certain segments of customers or certain types of issues, or have you tried just letting it run as a first pass on everything?
Danielle Murphy:
Yeah, so the way we tried it previously was kind of all first passes, but it only went to the support engineer. So it was more like a copilot saying, this is what I would answer. And we were like, well, it's great that you didn't actually answer with that, because it's not accurate.
Jess Herbert:
We did something similar and actually turned it on just for our Slack customers. As soon as the agent responds, it says: hey, I'm your AI agent, I'm going to be here, don't worry, I'll escalate to a human. And it was really well received, even when the answers weren't totally perfect. Our customers are very generous with their time and patience in relation to the AI agent.
Marty Kausas:
And one thing, Zac, I want to loop you in on. I think a lot of people in other industries, when they think about agents, are making them try to impersonate humans, which I actually think does not work in support. People get really mad if you say, hey, this is Anna from support — you're tricking them, making it seem like there's someone there who isn't. So Zac, can you talk about the profile that you created for Panther?
Zac Hodgkin:
Yeah, so we named it Peter Panther, and we did an AI generation for the image. And in its canned messaging, we called out the obvious, because our TSEs have really good relationships with their customers, and we don't want an agent impacting the relationships our support team has built. It says: hey, I'm an agent, I'm not going to just keep messaging you — I'm going to take a crack at this, and if it's not okay, I'll get you over to a smarter person who can answer this for you. So, a very obviously AI guy that's trying to help.
Marty Kausas:
Yeah, and we've seen that across the board. Initially, for our own product, in the create-agent flow, we bought this pack of human faces that you could stick onto the AIs, and we found no one was using them. People thought it was super creepy, and it was just not cool. And especially if you're trying to pretend to be a person and then you respond incorrectly, that just makes your company look so bad, right? So it's better to just get ahead of it — maybe even say "AI beta," so it's clear it's a beta you're trying out, not the real thing. So I highly, highly recommend, when people do start implementing conversational agents or AI chatbots, that they don't try to fake a person. Make it clear this is just a first-pass attempt to help speed up the process.
Danielle Murphy:
We have a similar one that says: hey, I'm in training, I may not always hit the mark, but I'm here to help. It reminds me of Duolingo — I think one of their notifications, after constantly trying to remind you to do your lessons, says: oh, these notifications don't seem to be working, I'll stop. And I think with that notification people kind of felt bad and were like, oh no, no, it is working — and they returned to their lessons. It makes me think it's quite similar when the AI bot says, oh, I might not always get it right, but I'll try. You nearly feel some sort of sympathy, like, no, it's okay, you're doing your best. This is fine.
Jess Herbert:
We saw the same thing — we actually had some of our customers saying "good job" to our bot, which was fun to see.
Danielle Murphy:
I know — I'm like a proud parent whenever I see it has answered correctly. I'm like, oh, this is weird, but I'm so proud.
Marty Kausas:
One thing to point out as well: pre-AI, knowledge management was the source of truth for self-serving your own support. If you Google something or search the help center, you can answer your own question. It's even more important now, because that knowledge is powering everyone's drafting capabilities — if you want the AI to take a first pass — and it's powering the conversational agents. It's crazy: when people come to us and say, hey, I want to implement AI, our first question is, what does your knowledge base look like? If they don't have a knowledge base, it's like, okay, what's the AI going to learn from? Where's the data going to pull from? Everything actually has to start with that knowledge base being really clean. So anything you can do to make that process more efficient — identifying what to write, and actually writing the articles with AI article generation, as we've discussed — ends up being super important. Moving upstream in that flow of identifying what to write: has anyone tried anything to help figure out what content is missing, or how to even think about what to prioritize?
Jess Herbert:
I'll start on that one. There's actually a great question in the chat right now about exactly that. I think all of us who use Pylon are super lucky to have the AI knowledge gaps option. It looks back at our tickets, and it can say if a single ticket needed a new article, and it can also find trends and recurring themes that need one. Then you can, again, generate an article, and it will show you the tickets associated with that gap and use all of that data to generate the article. Before that, it was: oh, we realize we don't have the article when somebody asks the question, and it goes into a backlog and waits until you can update it on your own. So those two — the gaps, and the articles needed for a specific ticket — really accelerate exactly what we need to do.
Marty Kausas:
Yeah. Anyone else? Danielle, have you thought about gaps, or implemented those, or are you planning to?
Danielle Murphy:
Yeah, so one example was actually today, and it was more for future-proofing. We changed something in a module, and any of our customers currently using it are going to need to update. We had this whole conversation about why they need to update and what steps they need to take. From that, I could just generate an article specific to people upgrading from the previous version to the new one, rather than setting up fresh. I could also use the AI to ask: okay, what customers have talked about this who may be using it? And then again, with Pylon, I could send a broadcast to all those customers specifically: hey, we see you're using this — here's an article for how to upgrade it. It's so nice when all the different features come together. And I think, definitely for support, being proactive like that — hey, this is going to break in the future because of some changes, but this is what you can do now to get ahead of it, and here's a knowledge base article specific to you — means they're not having to read around it and guess which parts do and don't apply. It's just straight away there for them, and they don't run into an issue further down the line where they get frustrated.
Marty Kausas:
Got it. And Zac, what about you? What was your process before — or I guess currently, and where you're going? How do you identify what to write content for, and how are you thinking about that moving forward?
Zac Hodgkin:
Yeah. For every support interaction, our TSEs try to follow a process: understand the customer issue; use their resources — search the KB, search Docs, search Slack, ask someone else, ask engineering; and when they find the answer, if it's outside of those resources, create a new resource and use that to share with the customer. So we try to shorten the gap by incorporating that process into the actual ticketing process. But like we've talked about already, there are things we miss, things that aren't there. And Pylon has a really good way of identifying potential knowledge base articles — the missing gaps. Another thing that falls into this — which actually isn't a knowledge gap thing, it's the opposite problem — is one of the cool things Pylon can do.
Marty Kausas:
I know we're all just saying "Pylon does these things" — sorry about that, by the way. But when we talk about what's next, it's platform stuff, right? It's the tools.
Zac Hodgkin:
Yeah, it is. But Pylon does do these things, and they work really well, so we're going to talk about them. One of the things you can do, when you're generating an article, is see whether there are potential duplicate articles. That helps you maintain the health of your knowledge base, so you don't end up with a bunch of duplicates — which, in the past, was something miserable to sort through, right? Knowledge base cleanup and maintenance is a huge piece of it, and it's kind of a nightmare unless you stay on top of it.
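[Editor's note: an illustrative sketch of the duplicate-article check Zac describes — before publishing a draft, flag existing articles whose text overlaps heavily. The Jaccard word-set similarity here is a hypothetical stand-in; a real product would more likely use embeddings.]

```python
# Sketch: flag likely duplicate KB articles by text overlap with a new draft.
def jaccard(a: str, b: str) -> float:
    """Similarity of two texts as overlap of their word sets (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not (wa or wb):
        return 0.0
    return len(wa & wb) / len(wa | wb)

def find_duplicates(draft: str, existing: dict[str, str],
                    threshold: float = 0.5) -> list[str]:
    """Return titles of existing articles that look like duplicates of the draft."""
    return [title for title, body in existing.items()
            if jaccard(draft, body) >= threshold]
```

Surfacing the near-matches at draft time, rather than during a periodic cleanup, is what keeps the Frankendesk problem from accumulating in the first place.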
Marty Kausas:
Yeah. I'll also say — I'm sure everyone's had this experience, both on the panel and listening — we hear the Frankendesk story all the time. You come into a new team, a new company, and you have no context on everything that's been written. It's literally a crazy hodgepodge of potentially hundreds of articles. I think that's a very obvious place where this can help: hey, you're about to write an article, and something just like it already exists. It's one of those things you weren't even doing manually before — or maybe you'd try, but it's so hard to do manually, so frustrating and not useful. And Jess, I know you had something to say too.
Jess Herbert:
Yeah, the other side of that, too, is, like I said, we're working on V2 of our Knowledge Center. So we have all of our old documents as we're rewriting them, and the duplicate article notification has come in really handy a couple of times, just to stop our customers from being confused about, oh, do I do it this way or this way? So yeah, it's fantastic.
Marty Kausas:
Awesome. Okay, so we're about at the time where we should move to Q&A, and we have a bunch of questions. Actually, sorry — there's one more really important question first, and I'd love for all of you to answer quickly: how has the team received the AI stuff? Some people are concerned: oh, this is threatening my job, or maybe it's going to make my job more robotic because I'm just pressing a bunch of buttons. Give the honest reception of how it's been for the team internally.
Danielle Murphy:
For us, it has gone down really well, especially because the AI agent and the knowledge base articles can help a lot with the repetitive stuff, but maybe not as much with the issues that take a lot of investigation and troubleshooting. And for us as engineers, those are the more fun issues. A lot of the time you get stuck with the repetitive ones — explaining something, or sending document links — so you don't have as much time to deep-dive on the issues you're excited to investigate. So one way we look at it is that it's taken away the grunt work, and we're free to deep-dive and troubleshoot the issues we're actually excited about, the ones where you've no idea what could be going on because there are a lot of different moving parts. My support team is very excited about that. For other teams, such as product — I'm very lucky at Spacelift that our product side knows there's a lot of value in our customer interactions. Previously, they'd ask me, oh, what customers should we reach out to about this feature we want to release? And I'd have to take some time to think, okay, who have I had conversations with around this, and search Slack. But now, with the Ask AI feature we have on our issues, it can just search and do it for me, which makes it a lot more accessible for me and for them. So we can build a much more customer-focused product — based on real information and real conversations we've had, in a matter of minutes.
Jess Herbert:
Yeah, I totally agree with that. Our team has embraced all of the tools we've been given — we were probably doing it ad hoc before with ChatGPT or whatever other tool we had, so having it all in one place helps. And it goes beyond knowledge management. Like you said, Danielle, it's the searching across all the tickets, not having to remember every single detail, and finding things easily. That really gives us that little bit of edge, some more time to do the fun things.
Marty Kausas:
Zac, anything to add quickly?
Zac Hodgkin:
No, same boat. The TSEs love it.
Marty Kausas:
Awesome. Cool. Okay, let's hop into the Q&A section, and we'll start from the top. By the way, if you don't see the Q&A, it's in the sidebar on the right — chat is the default, but the Q&A is the third tab. So, first one, from Ravi: after an article gets generated from a customer conversation, do you have an approval mechanism before publishing? And if you're a big company, how do you manage all these generated articles? Zac, you're probably a good person for this one — you mentioned approval workflows already. Do you want to take it?
Zac Hodgkin:
Yeah. For us, it's basically what I mentioned earlier: specific people who have produced vetted content that's been solid for an extended amount of time can just publish things without having to get someone's review. But for our TSEs who are more junior, we have an approval process: they ask for someone's review, the reviewer reviews it, and they do a little back and forth if more information is needed. Once it's up to snuff, we publish it. That's how we do it here at Panther.
Marty Kausas:
And just tactically, can you describe how someone asks for a review?
Zac Hodgkin:
Yeah, that's a good question. Currently, how we're doing it in Pylon is with comments. We'll tag someone else and say, hey, this is ready for review. The reviewer will say, hey, it looks like blah, blah, blah, or it's missing whatever — can you check on this? The other person says, all right, I did it. And then we're like, all right, good to go, publish it. But one of our feature requests is having an actual approval flow, with a queue within the system, because that's what we've done in the past with other knowledge base platforms. So we're just waiting for Pylon to give us that type of workflow.
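[Editor's note: a minimal sketch of the approval queue Zac is asking for — drafts move through explicit review states, and invalid jumps (like publishing an unreviewed draft) are rejected. The class, state names, and transition table are hypothetical, not any platform's actual API.]

```python
# Sketch: draft-article approval as a small state machine.
class ApprovalError(Exception):
    """Raised on an invalid state transition, e.g. publishing an unreviewed draft."""

class DraftArticle:
    # Allowed transitions: draft -> in_review -> approved/changes_requested,
    # changes_requested loops back to in_review, approved -> published.
    TRANSITIONS = {
        "draft": {"in_review"},
        "in_review": {"approved", "changes_requested"},
        "changes_requested": {"in_review"},
        "approved": {"published"},
        "published": set(),
    }

    def __init__(self, title: str):
        self.title = title
        self.state = "draft"

    def move(self, new_state: str) -> None:
        if new_state not in self.TRANSITIONS[self.state]:
            raise ApprovalError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
```

Compared with the ad-hoc comment-tagging workflow described above, the win is that the system itself enforces "reviewed before published" instead of relying on convention.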
Marty Kausas:
Totally. As you were describing that, I was in pain listening to the comment workflow. For anyone listening: basically imagine Google Docs style — you highlight something and tag someone, and that DMs them in Slack with whatever comment was left. So yeah.
Jess Herbert:
I mean, it's that same workflow, and it's a little painful, but it's not horrible.
Marty Kausas:
This is how you know this is the truth — the truth is coming out here. So it's not all AI magic. Okay, cool. Let's go to Jackie's question: how do you balance the use of AI with the loss of hands-on skills — for example, using AI to create content versus writing the content yourself? Danielle, I feel like you might have an opinion here.
Danielle Murphy:
Yeah, definitely. So, I hate writing content. And I think part of it is that a lot of our customers are based in America, and I've realized over the past few years there's a big difference between what I speak — kind of Irish English — and American English. Previously I'd write an article, or even a Slack message, think it was fine, send it to someone, and they'd be like, what does that mean? And I'd realize, oh, okay, that must be Irish English. With AI, it kind of does it for me in American English, and I think it actually improves my skills rather than eroding them, because I'm still reviewing the articles and still making changes. I'll read something and think, okay, yeah, saying it that way, or this style of flow, actually works a lot better than what I would have had. So yeah, I definitely think it's improving my skills each time, more so than losing them.
Marty Kausas:
And maybe the heart of the question is around losing hands-on skills. I guess: does it feel like you need to know less about the topic the article is being generated from, or do you still need to maintain the same level of knowledge around it?
Danielle Murphy:
I'd say, definitely for now, we're still maintaining the same level of knowledge, because anything in the article — we should make sure it's the truth before we publish it. But it's also helpful when it adds something extra and we're like, oh, where did that come from? We can see it was referenced in this issue, go look at that issue, and go, okay, I didn't know that feature also had this little extra part. So again, it's improving things that way: you're not having to go from scratch thinking, okay, this is all I know and I hope that's it. With the AI, it has all the information available, and you can go through it and say, okay, maybe that's not 100% applicable — we went a bit off on a tangent in that conversation — but this part is, and I didn't actually know that. So you're constantly learning from it as well.
Marty Kausas:
Yeah, and one thing to point out: if you're doing AI article generation, it's coming from responses you've given to customers. So you basically already had to have the background and information to know what to say — or someone on your team did. The article generation is just taking work you've already done and putting it in a different, more generalized format. So maybe that's another way to think about it. Another question from Jackie: what ethical considerations should we keep in mind when using AI in the knowledge management space? I'll open that up to whoever wants it.
Jess Herbert:
I think for us, it's ethics, yes, but also, like I said before, we have protected health information in our tickets, and we have to make sure that is filtered out and not included in anything that could be shared across customers, or even publicly. So I think that's one of the biggest considerations we take — whether that counts as ethical or not, I'm not sure.
Marty Kausas:
I think that counts. Danielle, okay, a question directly for you: what are the sources of knowledge base gaps, and how do you figure out what topics you need to create an article for? I think you may have already covered this, but feel free to resummarize.
Danielle Murphy:
Yeah, definitely. Right now, the way our flow works is that as we're closing out tickets, we mark them as either needing a new article, needing an existing article updated, or already covered by an article. That way we're not missing anything. The downside, obviously, is that it only happens as tickets come in — and now that Pylon has knowledge gaps that will tell us what's happening, we don't need to worry about that as much. But another way we did it was with AI again: it could automatically tag issues, so if we had a lot of issues around, say, the user management feature, we could have a dashboard showing which features were the highest case drivers. We could click into user management and say, okay, we definitely need some articles around this — this issue comes up a lot, let's focus on it. So at the start it was about looking at the case drivers and which articles we could get out to reduce the number of tickets we were getting. And now it's just closing tickets as they come in and making sure there's an article there. We're constantly scaling up as we go, and it makes our life a lot easier.
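[Editor's note: a sketch of the case-driver dashboard Danielle describes — tag each ticket with a feature, tally the top drivers, and surface the features that still lack an article. The keyword-based tagging here is a hypothetical simplification; in her workflow the tagging was done with AI.]

```python
# Sketch: count tickets per feature, then list undocumented high-volume features.
from collections import Counter

def top_case_drivers(tickets: list[str],
                     feature_keywords: dict[str, list[str]]) -> Counter:
    """Count tickets per feature based on simple keyword matches."""
    counts = Counter()
    for text in tickets:
        lowered = text.lower()
        for feature, keywords in feature_keywords.items():
            if any(k in lowered for k in keywords):
                counts[feature] += 1
    return counts

def article_gaps(counts: Counter, documented: set[str]) -> list[str]:
    """Features with the most tickets that still have no article, biggest first."""
    return [f for f, _ in counts.most_common() if f not in documented]
```

This captures the prioritization logic in the turn above: write articles for the biggest case drivers first, because that is where deflection reduces the most ticket volume.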
Marty Kausas:
Awesome. Okay, I know we're right at the end here, so I want to be conscious of time. I really want to thank everyone — panelists Jess, Danielle, Zac, thank you so much for being on the panel. This has been great. Thank you for also giving Pylon some shoutouts; I really appreciate that. For everyone else, we'll send out a recording of this afterwards. I know some of what we talked about is kind of unclear unless you can see it visually — when we say "AI article generation," does AI actually write the article, and how does that work in practice? — so we'll also send a walkthrough of how you can run some of those workflows. Thank you, everyone, so much for coming. Thank you to the panelists — really appreciate your time and your sharing your wisdom. You are trailblazers here, obviously, and clearly a lot of people want to hear from you. And shout out to Richard and the Support Driven team for making this happen. Thank you very much, everyone.
Danielle Murphy:
Thanks, everyone. Thank you.
Get started today
We'll walk you through how you can get started and provide recommendations on how to scale your team and setup.
Book a demo