
We make regulation accessible to everyone with the most advanced Artificial
Intelligence platform dedicated to corporate compliance.

The Aptus.AI Online Documentation


Our main objective is to support our clients in the regulatory change management process. The flow our platform currently supports is the following: when a client has included a specific document in a Legal Inventory and that document is affected by a regulatory change, the user can see a comparison between the two versions in track-changes mode and can then update the Legal Inventory accordingly.
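
To illustrate the kind of version comparison involved, here is a minimal sketch using Python's difflib; the sample texts and the diff tool are our own illustrative choices, not the platform's actual implementation:

```python
import difflib

# Hypothetical excerpts from two versions of a regulatory document.
old_version = [
    "Institutions must report exposures quarterly.",
    "Reports are submitted to the national authority.",
]
new_version = [
    "Institutions must report exposures monthly.",
    "Reports are submitted to the national authority.",
]

# Produce a track-changes-style view: lines prefixed with "-" were
# removed, lines prefixed with "+" were added.
for line in difflib.unified_diff(old_version, new_version, lineterm=""):
    print(line)
```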

Our Alert is a near-real-time service. We send daily alerts with a delay of at most two days from the official publication date. More specifically, the limit case is receiving the alert 8 working hours after publication on the legal source (e.g. a Friday afternoon publication is received on Monday morning).
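
As a rough illustration of how the 8-working-hour window spills over a weekend (the 9:00-17:00 working day below is our own assumption, purely for the example):

```python
from datetime import datetime, timedelta

WORK_START, WORK_END = 9, 17  # assumed 9:00-17:00 working day

def add_working_hours(start: datetime, hours: int) -> datetime:
    """Advance `start` by `hours` working hours (Mon-Fri, 9:00-17:00)."""
    t = start
    remaining = timedelta(hours=hours)
    while remaining > timedelta(0):
        # Skip weekends and after-hours to the next working morning.
        if t.weekday() >= 5 or t.hour >= WORK_END:
            t = (t + timedelta(days=1)).replace(hour=WORK_START, minute=0)
            continue
        if t.hour < WORK_START:
            t = t.replace(hour=WORK_START, minute=0)
        end_of_day = t.replace(hour=WORK_END, minute=0)
        step = min(remaining, end_of_day - t)
        t += step
        remaining -= step
    return t

# A Friday 16:00 publication: the 8-working-hour window ends the
# following Monday (2024-06-07 is a Friday).
print(add_working_hours(datetime(2024, 6, 7, 16, 0), 8))
```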

We have a default regulatory area taxonomy, which we evolve over time to meet customers’ needs. Our approach is based on a very detailed taxonomy that allows users to customize alerts to their needs and to actively use our search engine to find documents of interest and conduct regulatory research. However, if needed, we offer our support to reconcile the taxonomy and make sure the client gains added value from the platform and its features in their daily routine.
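
For example, a detailed taxonomy makes alert customization a simple subscription filter; the area names below are hypothetical:

```python
# Hypothetical sketch: with a detailed regulatory-area taxonomy, alert
# customization reduces to a subscription filter. Area names are invented.
subscription = {"AML", "Payments"}  # areas this user has subscribed to

alerts = [
    {"doc": "EBA Guidelines on ...", "areas": {"AML"}},
    {"doc": "GDPR update ...", "areas": {"Privacy"}},
]

for alert in alerts:
    if alert["areas"] & subscription:  # any overlap with subscribed areas
        print("notify:", alert["doc"])
```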

The complexity of the activity strictly depends on the legal sources of interest and, more specifically, on the number of new issuing bodies/legal sources to be integrated, on the publication channels to be integrated (i.e. the specific website pages to be monitored) and on the publishing practices of the local Authorities (such as the languages made available or the document formats). Our approach involves a data integration assessment (which takes at most two weeks) whose output is the integration feasibility and the project timeline.

Yes, of course. The level (and timing) of effort depends on the specific legal source you want to add. We maintain a list of supported document types, which should be considered in order to evaluate which documents from the new legal source will be available in the platform. Also, the content of the legal source may be private; in that case we need to evaluate the terms and conditions to avoid breaching your agreement with the provider.

We have a content intake pipeline made up of scrapers, crawlers and AI models that monitors 100% of the relevant documents published by the official monitored legal sources. We guarantee 100% coverage thanks to our human-in-the-loop system: we crawl all the monitored websites; all documents are then classified as relevant or not relevant by a mixed AI and human-in-the-loop system; the relevant ones go through the AI model pipeline, which outputs the equivalent machine-readable format (MRF).
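
A schematic sketch of such an intake flow (every function below is an illustrative stub, not our actual crawler, classifier or MRF converter):

```python
from dataclasses import dataclass

@dataclass
class Document:
    url: str
    text: str

# Illustrative stubs standing in for proprietary components.
def crawl(source_url: str) -> list[Document]:
    return [Document(source_url + "/doc1", "Institutions shall report ...")]

def relevance_score(doc: Document) -> float:
    return 0.9  # stand-in for the AI relevance classifier

def to_machine_readable(doc: Document) -> dict:
    return {"url": doc.url, "articles": [doc.text]}  # stand-in for the MRF

def intake(source_url: str, threshold: float = 0.8) -> list[dict]:
    mrf_docs = []
    for doc in crawl(source_url):
        if relevance_score(doc) >= threshold:
            mrf_docs.append(to_machine_readable(doc))
        else:
            pass  # route to human review (human-in-the-loop) before discarding
    return mrf_docs

print(intake("https://example-authority.example"))
```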

The performance of the chat, in terms of precision, completeness and accuracy of the answers, strictly depends on the prompt. In the research field, how to evaluate an LLM end to end is still an open question. However, in our technology, the performance of the chat strictly depends on the precision of the AI models creating the MRF of regulatory sources: the underlying context used by the Gen AI consists only of our MRF, hence what needs to be evaluated is not the precision of the chat as such, but rather its capability to process the correct information to generate an answer to the user’s question. We carry out this evaluation empirically, by means of UAT and human-in-the-loop review.

No, the basic package is alert+consultation only.

Gen AI reworks the underlying context. In the case of Daitomic Chat, the context is limited to regulatory documents, so the chat can rework this content depending on the prompt it receives.

We do not use our clients’ data to train our algorithms, which are trained on internally produced data. However, Daitomic Chat lets the user give positive (thumbs up) or negative (thumbs down) feedback, which we take into account when working on new developments of the Gen AI model.

(Internal note: this approach could change over time.)

The key to keeping answers free of hallucinations is the machine-readable format available inside our platform. Basically, as soon as you type a question, our Gen AI chat searches for the most relevant regulatory context needed to answer it. The user can double-check the content that was used to answer the question, so as to always be sure about the correctness of the information behind the answer. Since it considers only information coming from regulations, the Gen AI chat cannot hallucinate, because its answers are based on precise information.

Yes, we use GPT-4o APIs.
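
To make the grounding pattern above concrete, here is a minimal sketch of a retrieval-grounded call to GPT-4o via the openai Python client; the retrieval stub, sample passage and prompt wording are our own assumptions, not Daitomic Chat's actual implementation:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def retrieve_context(question: str) -> list[str]:
    """Stand-in for the proprietary search over the machine-readable
    regulatory corpus; returns the most relevant passages."""
    return ["Art. 5(1): Institutions shall report exposures monthly."]

def answer(question: str) -> str:
    passages = retrieve_context(question)
    prompt = (
        "Answer using ONLY the regulatory passages below.\n\n"
        + "\n".join(passages)
        + f"\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer("How often must exposures be reported?"))
```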

The application of AI technologies to images and tables is a challenge currently faced by all market players in this sector. We have already won the first battle: the capability to transpose images and tables into our MRF. With particular reference to tables, we are currently working to improve the application of Gen AI, starting from the capability to extract obligations/requirements from tables; considering that a lot of technical regulatory documents include tables, this is one of our priorities.

We measure precision following academic research best practices. You can find some of our academic work in peer-reviewed journals via the public Google Scholar database, under our founders’ names. The precision of our AI models, measured as a whole, is below 97%. (Internal note: the Dev team can argue this better; also consider that we are iterating on an internal measurement system.)

We continuously improve our AI algorithms by means of annotations made by our compliance and legal experts. From a technical point of view, we do not apply any interpretation: our algorithms are trained on the basis of objective parameters such as the hierarchy of regulatory sources and the morphology of texts. Regulators use specific forms when formulating an obligation, a sanction or a definition; our AI models understand this modus scribendi, not only keywords (e.g. “must” or “shall”). We have a continuous-delivery pipeline for AI models which, on one side, performs automatic training in light of new data (internal note: this training is automatic because it is unsupervised; once the model has been trained the first time, the data grow each time a new document is processed; nevertheless we have a human-in-the-loop process, a team that analyzes the data and recognizes when there is a sufficient sample to improve performance with a new release) and, on the other side, avoids regressions (internal note: a regression is a performance shortfall between two releases, e.g. today we successfully classify an obligation, but after a new release that obligation is no longer classified correctly, even if the overall performance level is unchanged) by performing continuous testing and enriching the testing dataset with a new gold standard for each release.
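
As an illustration of the per-example regression check described in the internal note (the data and the function are a sketch of ours, not the internal test suite):

```python
# A regression: an example the previous release classified correctly that
# the new release gets wrong, even if aggregate accuracy is unchanged.
def regressions(gold, old_preds, new_preds):
    return [
        ex for ex, old, new in zip(gold, old_preds, new_preds)
        if old == ex["label"] and new != ex["label"]
    ]

gold = [
    {"text": "Institutions shall ...", "label": "obligation"},
    {"text": "A fine of up to ...", "label": "sanction"},
]
old_preds = ["obligation", "sanction"]
new_preds = ["definition", "sanction"]  # the new release broke example 0

print(regressions(gold, old_preds, new_preds))  # non-empty -> block the release
```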

Our proprietary AI models exploit the way regulators write regulatory content. A characteristic of legal documents is that the way something is written also determines its semantic meaning: our AI models are trained to recognize the special word sequences that represent an obligation or a sanction, without making any interpretation of the content.

We can certainly upload private documents provided by the client who wants to consult them through the platform. The possibility to monitor a private source, instead, needs to be evaluated case by case in order to avoid breaching the provider’s T&C.

Our platform also supports soft law, and we also monitor consultations and Q&A documents. The possibility to integrate answers to consultations also depends on the way in which the specific Authority manages this kind of publication, so we would need to evaluate the specific case.

Yes, because the pricing is not related to the implementation but to the prioritization of the integration of the relevant issuing bodies. We have a roadmap for the expansion of our database, so the possibility to intervene on our roadmap and change its priorities comes at a cost.

Yes. We implement a classic multi-tenancy approach, on the basis of which we create separate tenants for each user, thus isolating the private documents. However, our mission is to create the “regulatory Google”, so we want to give all customers access to our entire regulatory database.
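
A minimal sketch of this isolation model (the schema is hypothetical): shared regulatory documents are visible to every tenant, while private uploads stay inside their own tenant:

```python
# Illustrative data model, not the actual Daitomic schema. Documents with
# tenant=None are the shared regulatory corpus; the rest are private uploads.
documents = [
    {"id": 1, "tenant": None, "title": "EU Regulation (public)"},
    {"id": 2, "tenant": "bankA", "title": "Internal policy (private)"},
    {"id": 3, "tenant": "bankB", "title": "Internal policy (private)"},
]

def visible_to(tenant: str) -> list[dict]:
    return [d for d in documents if d["tenant"] in (None, tenant)]

print([d["id"] for d in visible_to("bankA")])  # -> [1, 2]; bankB's doc stays isolated
```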

(Internal note: we will of course re-evaluate this approach when we enter a new industry, to make sure our users can easily exploit our search engine without “noise” coming from regulations not relevant to them.)

We are regulatory-area agnostic, meaning that our technology can be used on any document issued by any Authority. However, in light of our history and our commercial strategy, as of today we are focused on the financial industry, hence some aspects of our platform have been developed on the basis of financial institutions’ compliance methodology (such as the regulatory area taxonomy).

Our proprietary technology is language independent, and some features, such as Daitomic Chat (our Gen AI technology), can produce output independently of the input language (e.g. I could use Daitomic Chat to converse with a document in English by asking questions in Italian and receiving the answer in French). The front end, as of today, is English or Italian only, but translations into the relevant languages are already in our pipeline. A caveat applies to the AI models, however, because some features are not generalizable across languages, for instance external references.

That is why, when we enter a new country for the first time, we need to train our algorithms to ensure the same level of performance: this is an activity we can do easily, and which we did when we got our first foreign customer (from Luxembourg). In any case, we can easily retrieve English versions where available.

(Internal note: once we are active in different countries with different languages, each language will be priced separately.)

We are open to evaluating integrations with other systems. Our approach is not simply to hand over our APIs but to work together with our clients to make sure users benefit the most from our platform by embedding it in their daily routines and operations. However, in our experience, even when we open our APIs, every integration is an ad hoc project requiring a feasibility analysis, particularly to understand how to integrate each piece of information. Note that our partnerships (for instance Deloitte-Archer) can ease these activities.

Daitomic provides access to original regulatory sources. We do not provide legal advice. However, we can help you excel in using our platform by guiding you through the configuration of special legal inventories or of other information relevant to your business.

We host our servers on AWS, in Ireland (Europe).

We have a proprietary AI model which exploits open-source technologies: our platform is based on a patent already issued in Italy (in 2023) and under evaluation in Europe and the US. More specifically, the patent covers the whole methodology used to transform documents into a machine-readable format.

The Impact Analysis service requires the activation of the Legal Inventory module as well: this functionality lets the user control the regulatory context they want to run the Impact Analysis on, against the complete set of internal normative documents uploaded. The Legal Inventory, on the other hand, can be activated without Impact Analysis.
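
The dependency can be summarized as a one-way constraint, sketched below with hypothetical module names:

```python
# Illustrative only: Impact Analysis requires Legal Inventory, not vice versa.
def validate_modules(modules: set[str]) -> None:
    if "impact_analysis" in modules and "legal_inventory" not in modules:
        raise ValueError("Impact Analysis requires the Legal Inventory module")

validate_modules({"legal_inventory"})                     # OK: inventory alone
validate_modules({"legal_inventory", "impact_analysis"})  # OK: both together
```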

A few minutes.

We can update the documents for you as soon as you publish a new version of them. There are two main ways: i) setting up an automatic upload information flow, or ii) periodically sending updates that are processed by our operations team and uploaded into the platform. With current customers we have agreed on a maintenance fee, paid annually, to keep all internal policies continuously updated. The same applies to taxonomies.

Yes, of course we can. However, to have it available in the Impact Analysis, this taxonomy must be connected to the internal policies. The more granular the connections, the more precise the Impact Analysis results you will receive.

Completely. We can customize the priority drivers on the basis of the client’s internal methodology; we just need to configure both the drivers and the linear combination that generates the priority data point.
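
For instance, with hypothetical drivers and weights, the priority data point is just the configured weighted sum:

```python
# Illustrative only: priority as a configurable linear combination of
# client-defined drivers (driver names and weights are hypothetical).
weights = {"impact": 0.5, "urgency": 0.3, "sanction_risk": 0.2}

def priority(drivers: dict) -> float:
    return sum(weights[name] * drivers[name] for name in weights)

print(priority({"impact": 0.8, "urgency": 0.4, "sanction_risk": 1.0}))  # 0.72
```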