FAQs
I feel like I’m being ‘pushed’ into AI. Will this really help?
You are not alone. We recognise that this is the position many organisations currently find themselves in, so the focus of AI Foundations is to relieve that pressure by giving you the tools, understanding and risk management capabilities to validate where AI can, and cannot, add value in your organisation.
Will everyone use AI?
Nearly everybody already is; it's just embedded in tools you already use. Being less flippant though, there is a short-, medium- and long-term view. The short-term view (3-5 years) is that many organisations simply won't benefit from implementing AI. They either won't have the data, won't be willing to take the risk, or won't be willing to build or buy the capability. The medium-term view is that AI-driven experiences and support will be expected by customers and staff. Millennials will make up the largest section of the workforce by 2026 (according to the National Skills Commission of Australia), and this generation expects tailored experiences and digital tools in their work - two key capabilities that AI enables. Long-term, AI will undoubtedly be embedded in everything. Right now, the important thing is to be able to assess where you can and cannot get benefits and to manage the risk profiles associated with AI-supported actions.
We don’t even have our data under control, will this help?
Data Governance is an age-old topic that forms a barrier for many. Indeed, if you move AI models into general use, you need to make sure your data is secure, available, timely and of high enough quality to avoid outcomes such as bias. However, you will also find that a strong AI governance process will help you identify ways of improving your data and incentivise others to support you in doing so. You may also use AI to collect data you don't currently have, or collect too infrequently for it to add value. Either way, the tools, frameworks and understanding you'll gain from AI Foundations will help you lower the cost of your AI-related decisions, lower the risk of adverse outcomes, and empower your peers both to identify opportunities for improvement and to do the work needed to make them possible (such as data improvements).
What if I don’t think AI is relevant in our organisation?
You know what? It might not be. But at the moment, all you’re doing is stating that you don’t think it’s relevant. AI Foundations will give you a way of proving it, with people other than yourself explaining why. It will also mean that you have proactively invested in a capability that doesn’t assess relevance just once, but lets you reassess on an ongoing basis. This will not only make you look good to your executive team but will also allow you to share the experience with your peers, backed by the facts you have collected.
Isn’t AI dangerous?
AI allows you to do many things faster and at greater scale. It is dependent on the data it receives to learn how to process an input and respond with a recommended output. Training takes time and data; if you do not have one or the other, the likelihood of having an issue is high. However, arguably the biggest risk of AI is applying it in scenarios where the person impacted does not want that experience. Stopping a production line because AI has incorrectly flagged a product as defective slows throughput, but removing someone from a recruitment process because they came from a location the AI deems ‘not optimal’ is a nightmare. Strong governance processes will help minimise this risk.
We already have people using ChatGPT and some other tools. Are we putting ourselves at risk?
We will answer this question with a question: would those same people put corporate information into a Google search? If the answer is no, don’t put that information into ChatGPT. There are plenty of benefits to be had from the online tool, but it is fair to assume that putting sensitive information into ChatGPT is a bit like sharing that same information with someone you’ve just met at a party: you don’t know where it will go next. ChatGPT’s provider does everything it can to keep that information within its service and to ensure your data is used only to train the underlying model. Improved education and governance, so that everyone understands where they contribute to AI adoption in your organisation, will help minimise your risk.
How long does this engagement take, how does it work and what do I need to do?
The length of the engagement is determined by your organisational size and the number of workshops required. Our focus is on automation, tools and frameworks to make the most of face-to-face time. This allows us to offer a sliding scale of costs based upon the number of groups in your organisation you would like us to engage. For this engagement to be a success, we need your support in identifying the key stakeholders and communicating with them in advance. We will help you with these communications, but we deem them essential to getting the best outcomes together. You will also need strong executive sponsorship, which will help throughout the engagement.
I already work with a Technology company. Can you work with them?
Of course. Just make the introduction and we would be happy to work through that group. Our engagement is split into licensing for some of our tools and access to our frameworks, which will still be billed to you directly, but services can be run through your existing technology partner.