Planful Predict: Your Virtual Sidekick | Justin Merritt & Bikramaditya Singhal

This is a podcast episode titled, Planful Predict: Your Virtual Sidekick | Justin Merritt & Bikramaditya Singhal. The summary for this episode is: Planful Predict brings AI-driven superpowers to your planning and analysis yet leaves you always in control. Join Planful's Justin Merritt, Senior Director, Solutions Consulting, and Bikramaditya Singhal, Principal Product Manager, as they demonstrate Planful's native predictive intelligence and how it helps you make confident, intelligent, human-led decisions with greater agility and accuracy. The team will show how Planful Predict projects baseline revenue and spend to accelerate your budgeting and planning efforts and proactively prevents and identifies potential data errors. You'll even learn how Planful Predict can help validate actuals during your close and consolidation process.
The pains of making mistakes and errors
01:15 MIN
Use cases with Planful Predict
00:59 MIN
Predict: Projections
00:47 MIN
Predict: Signals
00:35 MIN
Predict: Signals - Demo
07:06 MIN
Predict: Projections - Demo
01:59 MIN
Planful Predict AI Engine
01:57 MIN

Genevieve: Well, hello, we're going to get started. My name is Genevieve. I am a product marketing manager here at Planful. Just a quick reminder. We're going to be talking about Planful Predict, which is super exciting, but this session will be recorded just like all the other sessions. So if you miss something, if you want to refer back to something, you will have access, not right away, give us a few weeks to process all of this stuff, but you will have access and you'll be able to watch everything. Especially, I know there are so many sessions going on at the same time, and you're scared that you're going to miss a session. You will not miss anything. Do not worry. But with that being said, I'm going to present Bikram. Are we going to applaud for Bikram? Are we going to change it up? And Justin. Yeah. Okay. Let's get started.

Justin: All right. Yeah. You keep your mic.

Genevieve: I keep mine?

Justin: Yeah.

Genevieve: Okay.

Justin: All right, that's it. Session's over. Yeah. Awesome. And I think a lot of you guys were in the last session that was in this room, too, but what we'll do, we'll do some intros, I'll walk you through the concept of Predict and what we're trying to do. And then Bikram, he's actually the lead product architect for Predict. So when we talk about how the actual data science works and everything, Bikram and I have been working together for a couple years to bring this to you. So hopefully you guys are as excited as we are about it. And then hopefully we can get all your questions answered. So we really want this session to be more of an open forum/open dialogue. So we're going to spend a little bit of time up front, just doing a little bit of review. We'll just do a quick, "Here's where things are at in the product." And then Bikram will actually start to talk about how the product works. So how did we actually architect this? Why is it different than anything else that's on the market? Why does it work better than anything else that's on the market? And then really, we should have plenty of time to answer questions. So if that works for you guys — and then we've got some people to pass mics around, so we should be able to get all your questions answered, hopefully, in the session today. But does that sound good? I see some heads nodding. Yeah. People are excited. Yeah. We're the last thing between you and lunch, so yeah. Let's do it. So you know who we are now, right? So this one, I really like this, because this comes from actual market research and talking to customers, talking to analysts. And I think if you look at this and if we polled the room, we'd probably get the same results. So when we ask people, even long-time Planful Host Analytics users, what do you want to do less of, these are the things that people talk about. They feel buried still when they're working. They feel like they still have to double-check data.
They still do a lot of checking the numbers, making sure they're accurate, making sure there aren't formula errors. They also sometimes feel stuck, like they're just in this rut; they're very tactical and manual. They can't get into that value-added analysis work. And then Grant said something like, "You don't usually go to college for finance and accounting to come out and then be checking spreadsheets." You have this idea that you're going to do stuff that's more impactful for the business. And so when you ask people, we're still stuck in that world. That's still a lot of what we do day to day as finance and accounting professionals. And then obviously worrying. You're just worrying about errors. I'm sure all of us in this room have countless stories of mistakes we've found and errors that were made, whether it was with Planful or without Planful. And that just happens because of all those touch points. If you saw the product keynote and you think about those millions and millions of data points, there's just all this manual touching that happens in the process. And it just naturally lends itself to, unfortunately, creating the potential for error. And so then we ask the question — if we could just, I use the word, my team laughs at me, but I say "automagically" — "If we could automagically make that go away, what would you want to spend more time on?" And people resoundingly came back with these three things. They want deeper insight, so they want to spend time actually doing the analysis. They want to work and think more strategically. And they really want to reduce that costly risk that exists in the plan. And so those are the themes. I will ask you guys, would you say the same? I've seen some head nodding. Is this fairly accurate? Okay. All right, cool. So that's where this whole idea of Planful Predict really comes from: how do we start to actually attack that? How do we start to reduce the less so that you guys can do the more?
And so with Predict, the first intro for those who have been customers for a while, or are plugged into what we're doing — we actually talked to the customers and they said, "The first thing we want is something to help us detect errors in the data." So what Bikram and I worked on is this idea of Signals. We want to take the data, those millions and millions of data points, and be able to quickly analyze and surface and show where there are these potential errors. And that naturally lends itself to the second stage, which is projecting the financials. So having that ability to actually help you plan for the numbers. And what's cool about this is when we think about using it in the platform, it's not just on plan data. We can use it on actuals. We can start to extend it out to things like Dynamic Planning and all those pieces. So when you think about the vision of this, think of it as truly this virtual assistant that's really spread across the entire product. So some of the key use cases — again, from market research and market data — and what we really tried to focus on first here is forecast detection. So really looking at the forecast, finding those errors, finding those challenges and problems, then moving into actuals detection. So when you think about your actuals — I spent a lot of time working on close cycles, closing the books at a publicly traded company. If you guys work at a public company or have, it's a very stressful process, and then there's this even more stressful thing when you do the analyst meetings. You've got to prep the CFO and you've got to prep the CEO and get ready. And so being able to have this level of oversight on actuals is, to me, really crucially important. So we wanted to make sure that it could work for that, as well. And then, obviously, reporting analysis and data validation. So having it seamlessly work across all of your different reports, all the different analysis you do in the product. And then finally, just that checking of the errors.
Having that helpful person or virtual person to check those errors. So with that, then, you get really the two products that we have available today. They're all part of the same product. Predict Projections — again, really looking at that intelligent forecast, giving you the trusted insights on your data. So whether it's a forecast that you came up with, being able to look at it and say, "Okay, what would the virtual financial analyst say, who has no bias, no preconceptions, and is just looking at data? What would that analyst say, versus what I say based on my implicit biases, based on the things that I'm unconsciously building into the forecast because, oh, I've been burned before?" I'll pick on Kyle because I see him: "Because Kyle overspent on the partner dinner budget, so I've got to put more money in there." So how do we start to attack those things? And then how do we start to have that collaboration? And that's really key, as you probably picked up on — better together, we want to collaborate. And then with Signals, using that to do detection. So having that ability to quickly detect. This is a big part of it. We didn't want something where you've got to push extra buttons. It should just work. It should just tell you, "Hey, there's something wrong." And that's really important. A lot of the other products on the market require you to go out and do all these extra processes. You've got to send your data out to some third-party service, then get the results back, and then consume it in some other way. We wanted to do it natively in the product, so when you're working with the product, it just works. And then monitoring and validating. So I'll show you just a little bit here in the product. Just a recap — you saw some of this in the product keynote. Hopefully, if you were in the previous session, I think Andrew showed a little bit of it as well.
But really what we want to get into is giving you that glimpse, and then we'll talk about the data science that's actually behind it and why this works, and then give you guys the opportunity to ask questions. And we'll try to answer any of those questions you have. Cool. All right. So let me do this. First of all, I always equate this back to when I was managing the plan of a large enterprise organization, and I had all these different people working on my team, collecting budgets and forecasts. We had business users, people on the manufacturing floors, a global organization, lots of people touching the plan. And so at the end of the day, all that gets rolled up. And then I'd have all these creative ideas about how I could look at and validate numbers. And for me, I'm a visual person. So using something like a dashboard is one of the ways I would do it, to compare last plan to current plan, those sorts of things. The other thing I would do, which is really horrible — and I'm really surprised this guy, Jamie, still comes and works for me, because he worked for me all those years ago — is I'd make people print out these binders of all the submissions. I see some people laughing. That's literally what I would do. I'd say, "You have to print out every submission and three-hole-punch it, put it in a binder." And I'd sit on the couch, and I'd go through, and I'd highlight stuff and put Post-its, and somebody would scan it and then send that stuff back. So you can imagine — a little older, we didn't have iPads to do that kind of stuff with. But then you get that feedback and you try to adjust the plan. And so that process is terrible. So how do we start to identify those errors? How do we use Signals? How do we make the data just better so we inherently trust the information more, so we don't have all that wasted effort in that process?
And so that's where, with Signals — and I'm going to start here just because most of you guys in the audience are probably budget owners, right? — what we're doing is that exact same process that I described. We're just doing it automatically. So as the data is coming into the application — whether it's actuals, whether it's plan, whether it's a new scenario you've created to go model out an acquisition or something — the system's always looking over the numbers and going to show where those potential errors are. So with the Signals overview here, any user in the application can access this based on their security and permissions. But really, as the budget manager, you can come in and start to understand where there's potential risk in the plan. We can, of course, use the dimensionality to filter this and go through it. But also, we can start to dive into these different pieces of risk. So again, graphically, what we're doing here is showing you the output of the AI engine. That's that gray range. Who in here has taken a statistics class? Okay. Who took a statistics class in the last three years? Okay. Come on. All right. So this is how I explain it. Bikram, I think, hates this because I have to dumb it down, because it's really complicated. But when you think about statistics, if you remember, you've got the mean, and then you've got standard deviations off of the mean, right? So in a really highly simplified way, that's what we're doing here. We're doing this analysis, we're producing a projected or predicted result for a specific set of dimension combinations or accounts. And then we're showing you graphically how the values that you have compare against that. And then we're assessing a risk rating based on how far away, or how wide that spread is, to let you know, "Hey, there's a potential problem."
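Justin's simplified mean-and-standard-deviations explanation can be sketched in a few lines of Python. This is a hypothetical illustration only — the actual Predict engine produces a model-based predicted range rather than a plain mean/std band, and the function name and thresholds here are invented — but it shows the idea of rating a submitted value by how far it sits from the expected range:

```python
import statistics

def signal_risk(history, value, band_width=2.0):
    """Rate how far a submitted value falls outside the expected range.

    Hypothetical simplification: the expected range is the historical
    mean plus/minus `band_width` standard deviations (the "gray range").
    """
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    if std == 0:
        return "high" if value != mean else "low"
    z = abs(value - mean) / std      # distance in standard deviations
    if z <= band_width:
        return "low"                 # inside the gray band: no signal
    elif z <= 2 * band_width:
        return "medium"              # just outside: worth a look
    return "high"                    # far outside: the red dot

# A value of 50 against a stable history around 10 is flagged high risk;
# a value close to the historical mean raises no signal.
print(signal_risk([9, 10, 11, 10, 9, 11], 50))    # "high"
print(signal_risk([9, 10, 11, 10, 9, 11], 10.5))  # "low"
```

The risk rating per dimension combination would then drive the coloring in the Signals overview, with drill-down and commentary handled in the product itself.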
So here, where this red dot is, it's saying, "That's high risk," because that's a significantly different value than what we have traditionally seen in this account, in your business. And there's really no reason — again, from a math perspective — for that to be there. And so that's where we start to get into the collaboration: we'd like to collect things like a comment for that, or some backup documentation, or you might want to go ask somebody, "Why do you need the extra money in this account?" And so that's the idea with Signals — it's doing the review process for you, so you're getting directly to the point where you start to interact with the people who actually own the numbers. And then importantly, we wanted you to be able to do that interaction right from here. So if I right-click, I can access commentary. For those of you who use comments or dynamic collaboration through Reports, it's that same functionality. If I make a comment here, we can see it in Reports. We can see it across the application. We also wanted you to be able to go directly to the source. So if I drill, just like from a dashboard or a report, I'm going to get down to where that information came from. If I'm looking at actuals, I can see the GL load, or transaction data if you're loading that. If I'm looking at plan information, I can see the specific plan that this came from, even click back using that new link functionality to get back to where it came from in the application — the specific template — adjust it, move on. The other thing, and we mentioned this briefly, is you'll notice this history toggle. Some of you will see this in your application. For others of you, this is coming soon as we roll out IV. So you've heard us talk about that. There are a lot of side benefits to what IV provides in the product. It supercharges the entire engine, makes everything faster, but it also gives us the ability to have a dynamic history view.
So I can see, with the click of a button on any value, all the iterative changes that have happened over time — the date, the time, the user, the DLR if it's coming from a data load on actuals. And there's some really cool stuff that you're going to see over the next couple months. So make sure you pay attention to the release notes, because there's some really, really neat stuff that's going to come out to continue to make this better and better for you guys. All right. So, Signals in Reports — we can activate that across any report. So let me just open up... let's see. I do like this one. So when you have Signals on, you want to be able to use that on any report. You'll see this little Show Signals toggle up at the top. You can just click it and then it'll highlight the values. Hopefully yours doesn't look like this — this is made-up demo data, so purposefully there are a lot of signals in here. But again, now we can start to use the dimensionality. And you'll notice I have parent members, parent rollups in here. So we can do these signals across any different combination. Think about the power of that, right? We're not just looking at the bottom-level account — airfare for this department or whatever. We can look at it more holistically. Because often, I'm sure for a lot of your businesses, you have a bunch of departments that roll up into a total number. You may not care about the give and take between the departments as much as you care whether the total number is okay. So you can actually do that with Signals, because Signals will look at each level of that data. You may have something in here where travel airfare only has a couple signals, but if you go down underneath that into the different departments, you may have less or more, right? So it gives you that flexibility to look at those different slices that are important, and assess the signals as you would assess them manually today without any help.
So you can do that same exact work that you're doing today — you just can do it automatically. It's going to tell you where to look, versus you having to try to find it. So then Projections just extends this concept into templates. If I go into the operating expense template here, we have our different rows, and again, I can access it through the right-click menu. You can also see, up at the top, we have Predict Projections up here. Before I do anything, I'm going to go to this Advanced Fill. So again, just like with the signal context that we looked at when we were reviewing the data, when we're actually creating the plan, we want to be able to give that same contextual information. We know there's a trust issue, right? We're finance/accounting people. We know better than anybody what these numbers should be. So we've got to show why the numbers are what they are. And this helps, especially if you think about getting into a distributed planning process. This is going to really help those people who aren't finance people, who don't have that acumen, because they can now see and understand why the system is telling them this should be the value. And they can see it both from a projected value here in the gray region — you can also see what you have in your plan versus what the engine is telling you. And then you can easily toggle on any prior history into that chart. So you can start to understand why the shape of the predicted range is what it is, right? You don't have to be a data scientist to understand this. You don't have to be a crazy statistician like Bikram. You can look at this and you can say, "Oh yeah, that actually does make a lot of sense." And the cool part about this is the engine is always running. So unlike other AI technology where it's a one-time thing — train the model, then it runs — this is always running. So your business changes, you get three months of actuals in, maybe something starts to shift in your business.
The engine knows that. It can detect that, and it can adjust that future projection based on that. And so that's a lot of the power here, as well. This isn't a static component. This is a real, live, living, breathing thing that now exists in the product and is always helping get you to that better answer. I'm going to turn it over to Bikram and put some slides back on. We're going to explain a little bit of the magic sauce behind this, because it is magic, for real. It is proprietary. We're working on patenting it. It's very different. Literally, you cannot get this in any other product. And so we thought it'd be really cool for Bikram to actually explain how we came up with this concept and how it works. And then we'll have that full Q&A for you guys. There you go.

Bikram: This one?

Justin: The right. Yeah.

Bikram: Perfect. Here comes the fun part. AI for finance — it's unique. It's hard. And AI for FP&A in general is even harder. But let me first hear from you. Do you have any thoughts? Because we have been the domain experts here, many of us. We own the financial planning. We have a little bit of an idea about data science and machine learning. Some of you have more of an idea, some of you have less. But nonetheless, why is it hard? Why do you not see very many data science and machine learning products in the market, in the FP&A space? Any thoughts?

Speaker 4: You need a lot of data. It needs to be very clean and regular, like follow the same patterns.

Bikram: Excellent. So you need a lot of data, but do you not have a lot of data? Do you not say that I have five years of data? I have 10 years of data. Is it not enough? Maybe. Subjective, right? Yeah. But why? If you say I have five years of data, then why is it not enough? The reason is most of the time, the granularity of the data that you're dealing with is at a monthly level, and five years of data, meaning 60 data points. So 60 data points is obviously not enough, but when you say five years of data, yeah. It's five years.

Speaker 5: There's also a lot of variables that are not in your control — outside forces on industries; depending on, is it a new store or an old store? There's a lot of factors that then make it near impossible to predict.

Bikram: Excellent. So, the gentleman here: there are a lot of variables beyond your historical actuals that are important. And that also applies to other business cases. Look at recommender systems, look at any machine learning use case — there will always be something. And that's why your accuracy is never a hundred percent. You can never be 100% accurate, because there are other variables that are playing some role, and you don't know what they are. At times you know, but you don't have control over them. And similarly, in the finance space, in the FP&A space, you have internal variables — your business decisions, right? Your strategy. There are external factors — the inflation, the war that's going on. So there are so many factors that are actually influencing your budget, the company's finances, but that's always the case. So now, coming back to insufficient data. We have a very good understanding now of why it's always insufficient data. Because even if you say five years of data, the data is not enough. You say 10 years of data — it's better, but it's not enough. So insufficient data has been a major problem for anybody in the FP&A space who wants to do machine learning and data science. We agree? Okay. One size does not fit all. Now, what do I mean by that? Anybody?

Speaker 6: Probably the things that [inaudible] industry to industry, just company to company — your business processes are different from company to company, maybe location to location.

Bikram: Perfect.

Speaker 6: And the size of the company-

Bikram: Size of the company, also. There are millions of reasons why a model that's built for your company does not apply to his company, right? But I'm not even talking about that. That's a fact; we'll agree to that. One size does not fit all. But I'm talking about the various GL combinations that you have within your own company. Yeah. Within your own company, a model built for a certain GL combination does not apply to another GL combination. And that makes it hard, very, very hard. So how many GL combinations do you have? 100,000? 500,000? A million? Depending on the size of the company you are. Now, with hundreds of thousands of GL combinations, how can you build a model that works all across? You can't. Maybe you can, if you're Planful. Hard to productize — now, why is it hard to productize? Because if it is not productized — consider, because we know data science and machine learning, we have dealt with it, we have worked with it, we have existed with data science and machine learning for quite some time now — more or less in the industry, you would see it as more like a services assignment. Whether internally in a captive capacity, or with somebody working for your company, or in a purely services capacity, it is always like a services model. So you have data scientists involved who will take a lot of time understanding your requirements. They would spend a lot of time understanding the business problem. They'll build a model. They'll verify. And then finally, they'll deploy. And then they'll teach you how to consume that information, because that's again another cycle, right? So there is so much that goes into it. That's why it is hard to productize. And not just that — every month you are getting new actuals. So what happens in data science, machine learning, AI in general, is that once you build the model, you have to maintain it. It goes through a maintenance process. It's not always current; it's not always updated.
It has to learn from the new actuals that are hitting your system, that you're capturing in your product. So then how do you manage? Now, we spoke about hundreds of thousands of GL combinations, which means hundreds of thousands of models. And maintaining them — how often? Every month, every quarter at the very least. Yes? So that you get a real feel of a rolling forecast. Yes? Now, that's why it is hard. Are we all convinced? Yes? Okay. So this is a high-level architecture of the Predict AI engine, where on the left, you have data. Now, this data is per GL combination; what we see in this architecture applies to every GL combination. The entirety of it, we call a machine learning pipeline. This is standard terminology, where data comes in, and there is this ETL process that massages the data — data cleansing, data preparation, and stuff like that. So we are no different there. And then comes the core AI engine part. Many of you might have the question: what is it that you are using in the product? What algorithms are you using? We definitely are using a lot of conventional machine learning algorithms that everybody else uses, but how we are different is how we have implemented it. It's a proprietary implementation that we have at Planful that I'm not going to talk about today. But we have a proprietary implementation of a bunch of conventional machine learning algorithms, and a little bit of proprietary components there, as well. And everything put together, that's the algorithmic part. And then conventional machine learning, time series techniques — again, the conventional stuff. So we use all these algorithms, and then we have a little bit of proprietary components there. And then there is this complex stacking, where you consume the output of so many algorithms and then produce something that's more reliable. And then there is this serving layer, and that serves Signals and Projections.
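To make the "complex stacking" idea concrete, here is a deliberately tiny sketch in Python: several simple base forecasters each predict the next point, and their outputs are combined with weights based on how well each did on a recent holdout. Every function name here is invented for illustration — Planful's actual implementation is proprietary and far more involved than this:

```python
def naive(history):
    """Base model 1: repeat the last observed value."""
    return history[-1]

def moving_avg(history, k=3):
    """Base model 2: average of the last k points."""
    return sum(history[-k:]) / k

def linear_trend(history):
    """Base model 3: least-squares line extrapolated one step ahead."""
    n = len(history)
    mx, my = (n - 1) / 2, sum(history) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(history))
    den = sum((x - mx) ** 2 for x in range(n))
    slope = num / den
    return my + slope * (n - mx)  # predict at x = n

def stacked_forecast(history, holdout=3):
    """Combine the base models, weighting each by its inverse error
    on the last `holdout` points (a crude stand-in for stacking)."""
    models = [naive, moving_avg, linear_trend]
    weights = []
    for m in models:
        err = sum(abs(m(history[:t]) - history[t])
                  for t in range(len(history) - holdout, len(history)))
        weights.append(1.0 / (err + 1e-9))  # epsilon avoids divide-by-zero
    total = sum(weights)
    return sum((w / total) * m(history) for w, m in zip(weights, models))

# On a cleanly trending series, the trend model earns nearly all the
# weight, so the combined forecast continues the trend (≈ 22 here).
print(stacked_forecast([10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]))
```

The point of stacking is exactly what Bikram describes: no single base model is trusted outright; each GL combination's data decides, continuously, how much each model's output contributes.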
So I'll pause here for a moment to see if you have any questions. Yes.

Speaker 7: Is the model trained only on that company's data, or is it a wider pool of data that it's trained on?

Bikram: That is something that we cannot do. You know, we cannot do a federated learning sort of thing, where your company's data is clubbed with another company's data, and that with another company's data. No. This is company specific. This is GL combination specific. Your one GL combination is not used in another GL combination. Your left hand does not know what the right hand has to say. Yeah.

Speaker 8: My question is around, will this be able to be used in conjunction with, say, the SIM engine, so that you can run the SIM engine and it'll open up each template, do the automatic projection, and then save the template? So say the company has, like, a hundred profit centers — it'll automatically just go through them all, do the Predict projection, and then automatically save them?

Bikram: Oh, say that again. I'm sorry.

Speaker 8: So I'm asking if this will work in conjunction with a simulation engine where it'll open each template and save it.

Justin: Like scenario seeding, Bikram.

Bikram: Ah, scenario. Okay.

Justin: Yeah.

Bikram: You want to go with that?

Justin: Yeah. So that's exactly where we're headed. In the next couple of releases, those are some of the improvements or enhancements that are coming out. The foundation is there — it works in the templates, works in the user experience. We've got a lot of good data, a lot of good customer experience using this. And now what we're doing is starting to embed it into things like — not the SIM engine directly, but actually the scenario process. So one of the big asks that we've gotten from customers who've used this is, "Hey, we want to create a scenario, and we want to be able to pick the specific business units or entities, pick the specific accounts, that we want pre-populated with data across the entire planning period." And so we're adding that into the scenario setup area, so that when you go create a scenario, you have the ability to say, "I want to create this with Predict," and then choose where you want Predict to actually pre-populate the numbers for people.

Speaker 9: We are a company that acquires other companies. So at times, we may go back to having maybe just a year's worth of data. What's the minimum amount of data that we should have before we say, "Hey, let's run something like this"?

Bikram: Yeah. That's an excellent question. So you see, the Predict AI engine, like any other engine, is based on the data. You definitely have to feed in enough historical actuals for it to behave sanely, for it to produce reliable output. So a minimum of three years of data is definitely required. The more the merrier, but yeah, three years is definitely needed. And if three years of data is something that you don't have — for example, there are situations, in terms of mergers and acquisitions, where everything changes overnight — maybe if you run it twice and then merge the outputs, maybe there's some value, but otherwise it's difficult.

Justin: I just want to add onto that, because that's one of my favorite questions. We have a lot of customers who are high growth and going through acquisitions and those things. And so one of the things I always say is, when you go through the due diligence process, unless you're acquiring a company that's brand new, the data's there. And so if we want to use this type of technology, then what we're saying is, "Hey, use things like Org By Period" — which a lot of people don't even know is in the product. You could actually bring in that data from the company that you're targeting to acquire. You can use translation to bring that trial balance data in, going as far back as you can get during your due diligence. Now you have the ability to use AI against it. And then with Org By Period, you can set the acquisition date so that automatically, when you're running your reports, the system will either include or exclude that acquired company data. It'll also automatically create the performance for you. So there are some really nice synergies you can get; it just requires you to think a little bit differently about how you complete that acquisition in terms of bringing the data into the Planful product.

Speaker 9: Awesome.

Speaker 10: So how does the algorithm handle anomalies? For instance, Q2 2020: the numbers generated from that quarter are not going to be really helpful in predicting Q2 2023. Because that anomaly exists in my data, under our current forecasting process I have to remove anomalies before looking forward. So what do you do when you see these outliers occur?

Bikram: Excellent. So the prequel to that question is: do you even treat anomalies? And then, how do you do it? We definitely do, because garbage in, garbage out, right? If you don't treat the data properly, it's not going to produce reliable output. So we definitely do data treatment, and we definitely remove anomalies that might be present in your historical actuals. How do we do it? It goes through the same process, with a bunch of machine learning algorithms. In the anomaly removal part specifically, there is a machine learning component and a statistical component to get rid of those outliers. And just to re-emphasize a simplified version of Justin's explanation about distributions: we talk about data distribution, mean, standard deviations, and so on. In the real world, that theory is great, it's great mathematics, but it often doesn't work, because your data has to be normally distributed for something like that to apply, and that's more or less never the case. Some of you might recollect, "Yeah, I've heard of normal distributions." But if your data is not normally distributed, then the approach is not as straightforward. So I'm just trying to say that there is much more to it than that.
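[Editor's note: Bikram's point about mean-and-standard-deviation tests failing on non-normal data can be illustrated with a robust alternative. A Tukey interquartile-range (IQR) fence flags outliers without assuming normality. This is a sketch of the general technique, not Planful's actual anomaly-removal algorithm.]

```python
# Sketch of outlier screening on historical actuals. A naive z-score
# test assumes normally distributed data; an interquartile-range (IQR)
# fence is more robust to skewed series like the one below.
import statistics

def iqr_outliers(series, k=1.5):
    """Return indices of points outside the Tukey fences."""
    q1, _, q3 = statistics.quantiles(series, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, v in enumerate(series) if v < lo or v > hi]

# Twelve months of revenue with one Covid-style collapse in month 5:
revenue = [100, 102, 98, 101, 99, 20, 100, 103, 97, 101, 100, 102]
print(iqr_outliers(revenue))  # [5]
```

A production engine would do far more (seasonality-aware detection, model-based residual checks), but the point Bikram makes stands: robust statistics are needed because real financial data is rarely normal.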

Justin: And the beauty, too, is that with that scrubbing process, it can recognize that that quarter's data doesn't fit with the other data and scrub it out. But then think about it: hopefully not, but we know things change, things go in cycles, and maybe something like that happens again. That data is still part of what's happening behind the scenes from an analysis perspective. So think of it as: now it can detect and say, "Hey, things are moving back toward a place that resembles that quarter." And it can do that for you. One of the things I always say is, "If you can do it, this can do it better."

Bikram: Awesome. There's a question there?

Speaker 11: Yeah, I have a question, kind of a follow-up to the prior question as well. Can you select the years that you want to use? I know you said minimum three, but let's say there's a year that's off and you want to say, "Hey, I want to exclude that particular year and select three different years." Is that a possibility?

Bikram: Yeah, this is again a great question. The question is: can you exclude a year from the historical actuals, right? Now, if it's just one year, I don't think you have to, because the engine will take care of it, for sure. But here is a situation where I need your input. Say a merger or an acquisition happened three or four years back, and the data before that is completely different. Technically we call that a structural break, in time series terminology. If there is a structural break three years prior and you can only consider those three years of data, that's something, if you feel we should have it in the product, we are happy to bake into the product. But if it's just one year that you want to remove, because it's a Covid year or some other year that's not very relevant, the AI engine will do that for you. So you don't have to worry about it.
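[Editor's note: the "structural break" workaround Bikram describes amounts to trimming history at a known break date, so the model only sees the post-acquisition regime. A minimal sketch, with illustrative names; Planful does not currently expose this as described.]

```python
# Sketch: when an acquisition changes the business overnight, drop
# actuals recorded before the break date instead of feeding the mixed
# pre/post-acquisition history to the forecasting model.
from datetime import date

def trim_at_break(actuals, break_date):
    """Keep only (month, value) pairs on or after the structural break."""
    return [(m, v) for m, v in actuals if m >= break_date]

history = [
    (date(2019, 1, 1), 50.0),   # pre-acquisition scale
    (date(2019, 2, 1), 52.0),
    (date(2020, 1, 1), 200.0),  # post-acquisition scale
    (date(2020, 2, 1), 205.0),
]
print(trim_at_break(history, date(2020, 1, 1)))
```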

Speaker 12: Is there a roadmap to integrate Predict into dynamic reports?

Justin: Yes.

Bikram: You mean a dynamic report, or dynamic planning?

Speaker 12: A dynamic report.

Bikram: It's-

Justin: So through Spotlight? Are you talking like through Spotlight Reports?

Speaker 12: It could be.

Justin: Yeah. So think of this for a minute, from a roadmap perspective. You use dynamic planning, right? In dynamic planning, you have unlimited, infinite combinations and dimension structures. Think about the time dimension, one of the really key things when we talk about financial data. In dynamic planning, I might have a week dimension, a month dimension, a day dimension, a separate year dimension, and I might put all of those together to create the time component. So when we think about how we take this and extend it out to dynamic planning, it's a really interesting question. First and foremost, what we're working on is making it available through Spotlight for direct connect models. Those direct connect models, as you know, mirror the PCR model structure, so that's easy. The hard part with dynamic planning is that we have to build an entire user interface for you, the user who doesn't know data science, to select the necessary dimension components for the algorithms to work. When we talk about roadmap, those are the progressive steps to make that happen. We could do it today; we could literally turn it on. The problem is we need to productize that interface so that it's easy and doesn't require training. One of the things we haven't talked about is the implementation side: you literally just turn it on and it's working in the application. And when we think about dynamic planning, especially outside of those direct connect models, we want that experience to be exactly the same, so that it just works for you.

Speaker 12: We are very happy to test that out if you want to turn it on.

Justin: Perfect.

Bikram: Awesome.

Justin: We'll hold you to it.

Speaker 13: I just have a question about whether you can exclude certain dimensions. Say you had vendors set up in your tenant, but you didn't really want that to be part of the data science. Could you do that? Or is it not needed? Is the AI smart enough to look around that?

Bikram: Yeah, we hit that roadblock at some point, especially for Signals, and we have addressed it. So the question here is: can you ignore a certain dimension, right? We were dealing with a customer of ours that had a dimension named "product year," where the product years are the 2019 model, 2020 model, 2021 model, and so on. If it's a 2021 model, how can you go back and tell the backend, "Hey, give me five years of data for the 2021 model"? It doesn't work; there are never enough historical actuals for the AI engine to work with. So that was the situation where we came up with the ignore-a-dimension functionality, where the engine essentially considers only the root-level, or top-level, element, and the rest of the dimension is ignored that way. So yeah, we have a solution to that.
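[Editor's note: "ignoring a dimension" by rolling everything up to the top-level element can be pictured as aggregating values across that dimension before modeling, so the engine sees one long history instead of many short ones. The record shape and field names below are invented for illustration and are not Planful's data model.]

```python
# Sketch: sum values across an ignored dimension ("product_year"),
# keyed by the remaining dimensions, so short per-member histories
# collapse into one top-level series.
from collections import defaultdict

def rollup(records, ignore="product_year"):
    """Aggregate values across the ignored dimension."""
    totals = defaultdict(float)
    for rec in records:
        key = tuple(sorted((k, v) for k, v in rec.items()
                           if k not in (ignore, "value")))
        totals[key] += rec["value"]
    return dict(totals)

records = [
    {"account": "Revenue", "product_year": "2020 model", "value": 10.0},
    {"account": "Revenue", "product_year": "2021 model", "value": 15.0},
]
print(rollup(records))  # {(('account', 'Revenue'),): 25.0}
```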

Genevieve: Okay. So we're at time right now.

Bikram: Awesome.

Justin: Wow, that was fast.

Genevieve: And that was an amazing session, but I know that you have more questions. Justin and Bikram are going to be around; they'll be walking the floor, so you can go up to them and ask them all your questions. But first, give them a round of applause.
