
AI Analyst API

API Documentation

Overview

This API documentation details the available endpoints for interacting with the backend service. Each endpoint is defined with its corresponding HTTP method, URL pattern, and a brief description of its functionality.
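The examples below call endpoints through a backendClient helper that is not defined in this document. A minimal sketch of what such a client could look like, assuming a fetch-based wrapper and the https://api.testing.datrics.ai/api/v1 base URL shown in the /download example below (substitute the base URL of your environment):

const BASE_URL = 'https://api.testing.datrics.ai/api/v1'; // assumption: taken from the /download example below

const backendClient = {
  // POST helper: pathWithQuery already contains the query string (e.g. ?apiKeyUuid=...)
  async post(pathWithQuery: string, payload?: unknown): Promise<any> {
    const response = await fetch(`${BASE_URL}${pathWithQuery}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: payload !== undefined ? JSON.stringify(payload) : undefined,
    });
    return response.json();
  },
  // GET helper
  async get(pathWithQuery: string): Promise<any> {
    const response = await fetch(`${BASE_URL}${pathWithQuery}`);
    return response.json();
  },
};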

Client API Endpoints

POST /project/exploration/conversations/list

Description: Retrieves a list of conversations for the project.
Parameters:
  • apiKeyUuid (string): The API key UUID.
Payload:
  • user_uuid (string): User UUID; it can be generated on the client or by a third-party backend system.
  • descriptor_base_path (string): Path to the descriptor of the AI Analyst settings.
Example Request:
backendClient.post('/project/exploration/conversations/list?apiKeyUuid={apiKeyUuid}', payload)
Example Payload:
{ "user_uuid":"2db08c1c-e709-4a6c-ace1-eed41c2b8321", "descriptor_base_path":"225/ai_analyst/217/version/1" }
Example Response:
[ { "uuid": "37bec2cf-060d-48e5-aea5-0369fb6cb2b0", "user_uuid": "2db08c1c-e709-4a6c-ace1-eed41c2b8321", "id": 579, "title": "Predict Income amounts for the next months till the end of the year", "descriptor_base_path": "225/ai_analyst/217/version/1" }, { "uuid": "09e4ece9-f0b3-457d-b884-4d5276a0d514", "user_uuid": "2db08c1c-e709-4a6c-ace1-eed41c2b8321", "id": 561, "title": "What is the total Income, Expenses, and Cost of sales amounts for each month?", "descriptor_base_path": "225/ai_analyst/217/version/1" } ]
 

POST /project/exploration/conversation/item

Description: Retrieves a specific conversation item.
Parameters:
  • apiKeyUuid (string): The API key UUID.
Payload:
  • uuid (string): The UUID of the conversation to retrieve
  • user_uuid (string): User UUID; it can be generated on the client or by a third-party backend system.
  • is_summary_show_tables (bool): If true, the AI Analyst includes data tables in the conversation result.
Example Request:
backendClient.post('/project/exploration/conversation/item?apiKeyUuid={apiKeyUuid}', payload)
Example Payload:
{ "uuid":"37bec2cf-060d-48e5-aea5-0369fb6cb2b0", "user_uuid":"2db08c1c-e709-4a6c-ace1-eed41c2b8321", "is_summary_show_tables": true }
Example Response:
{ "conversation": { "id": 579, "uuid": "37bec2cf-060d-48e5-aea5-0369fb6cb2b0", "descriptor_base_path": "225/ai_analyst/217/version/1", "updated_at": "2024-07-01T10:57:54.294311+00:00", "user_uuid": "2db08c1c-e709-4a6c-ace1-eed41c2b8321", "title": "Predict Income amounts for the next months till the end of the year", "created_at": "2024-07-01T10:57:54.294311+00:00" }, "messages": [ { "prompt": "Predict Income amounts for the next months till the end of the year", "role": "user", "uuid": "aede54ae-2c3f-4e08-aa67-a5d228f4cf67", "answer": null, "errors": null, "updated_at": "2024-07-01T10:57:54.315608+00:00", "feedback": null, "charts": [], "tables": [] }, { "prompt": "Predict Income amounts for the next months till the end of the year", "role": "system", "uuid": "00cd449c-17d4-40b9-8454-f32ec2c1618b", "answer": "<div class=\"tech-details-start\"/>\n<b><i>Small talk result:</i></b>\nSTART\n<b><i>Selecting artifacts for answering the question:</i></b>\n\n```json\n{\n\"chain_of_thoughts\": \"To answer the question 'Predict Income amounts for the next months till the end of the year', we need to use the financial transactions dataset to identify income-related transactions. The 'Init Dataset - Financial Transactions from QB' contains the necessary transaction data, including the 'Transaction Date', 'Category PL', and 'Amount with sign' columns. We will filter this dataset to include only income-related transactions by matching the 'Category PL' with the '1st level category' from the 'Init Dataset - Charts of Accounts from Quickbooks', where the 'Account type' is 'Income'. We will then use this filtered data to fit a time series forecasting model and predict the income amounts for the remaining months of the year.\",\n\"show_rules_check\": \"The user's question does not contain any request for visualization, so we will follow the general flow and not include any visualization in the modified question.\",\n\"modified_question\": \"Take 'Init Dataset - Financial Transactions from QB' and 'Init Dataset - Charts of Accounts from Quickbooks' datasets. Filter the transactions to include only those where 'Category PL' matches '1st level category' from the accounts dataset and 'Account type' is 'Income'. 
Use this filtered data to fit an ExponentialSmoothing model and predict the income amounts for the next months till the end of the year.\",\n\"proposed_artifacts_rules\": \"The proposed artifacts should include the 'Init Dataset - Financial Transactions from QB' and 'Init Dataset - Charts of Accounts from Quickbooks' datasets, as these contain the necessary data for filtering and forecasting income transactions.\",\n\"proposed_artifacts\": [\n {\n \"step\": 30977,\n \"artifact\": \"init_dataset_d8bddca9-84ec-424e-8741-80a883e8b9f5\",\n \"name\": \"Init Dataset - Financial Transactions from QB\",\n \"type\": \"DATAFRAME\"\n },\n {\n \"step\": 30978,\n \"artifact\": \"init_dataset_0666fd77-7fa2-41de-a801-bba4cf854b6b\",\n \"name\": \"Init Dataset - Charts of Accounts from Quickbooks\",\n \"type\": \"DATAFRAME\"\n }\n]\n}\n```\n<b><i>Preparing operations to be used in classifications:</i></b>\n\n```json\n{\n \"answer\": [\n {\n \"operation\": \"Transaction selection\",\n \"steps_of_scenario\": [\n \"When user asks for Income/Expense then choose appropriate accounts (look at Account type column) and then choose transactions that has match of '1st level category' (from accounts) and 'Category PL' (from transaction)\"\n ],\n \"instructions\": [\n \"When user asks for Income/Expense then choose appropriate accounts (look at Account type column) and then choose transactions that has match of '1st level category' (from accounts) and 'Category PL' (from transaction)\"\n ],\n \"mandatory\": [\n \"When user asks for Income/Expense then choose appropriate accounts (look at Account type column) and then choose transactions that has match of '1st level category' (from accounts) and 'Category PL' (from transaction)\"\n ],\n \"addition_notes\": []\n },\n {\n \"operation\": \"Time Series Forecasting\",\n \"steps_of_scenario\": [\n \"1. Determine the length of HISTORICAL data (len(historical_data))\",\n \"2. Determine the FORECAST period (forecasted_periods) - take it directly from user's request or define according to instructions\",\n \"3. If length of historical data less than 2 (len(historical_data)<4) -> skip the model fitting , because you can't make forecasting\",\n \"4. If HISTORICAL data for prediction LESS OR EQUAL than 2*forecasted_periods add to the model TREND only (trend='add') otherwise - add both trend and seasonality to model (trend='add', seasonal='add')\",\n \"5. Set seasonal_period as min(recommended_seasonal_period, len(historical_data) // 2)\"\n ],\n \"instructions\": [\n \"1. use ExponentialSmoothing model from statsmodels.tsa.api\",\n \"2. Transform datetime variable via pd.to_datetime()\",\n \"3. Assess granularity of the data and make forecasting considering it - do not rise an error, but do necessity operations\",\n \"4. Add forecasting to dataset with actual data, add new column to resulted dataset with values 'actual'/'forecast' correspondingly\",\n \"5. If the user did not specify the forecasting horizon, take the default forecasting period as 1/3 from the total date range\",\n \"6. Make forecasting for the future - [max(date), max(date) + forecasting horizon]\",\n \"7. By default forecast should be made on Actual data\"\n ],\n \"mandatory\": [\n \"1. If max(date) - min(date) > 1.5 year - take seasonal period is equal to one year (365 days if data granularity is 'D') else: take seasonal period is equal to 1/2 of date range.\",\n \"2. 
Consider the date granularity\"\n ],\n \"addition_notes\": []\n }\n ]\n}\n```\n<b><i>Getting scenario for multiple datasets:</i></b>\n\n```json\n{\n \"sequence\": [\n {\n \"name\": \"Load Data\",\n \"actions\": [\n \"Reference the 'Init Dataset - Financial Transactions from QB' dataset from DF dictionary using key 'step_30977_init_dataset_d8bddca9-84ec-424e-8741-80a883e8b9f5'\",\n \"Reference the 'Init Dataset - Charts of Accounts from Quickbooks' dataset from DF dictionary using key 'step_30978_init_dataset_0666fd77-7fa2-41de-a801-bba4cf854b6b'\"\n ]\n },\n {\n \"name\": \"Filter Accounts\",\n \"actions\": [\n \"Filter the accounts dataset to include only rows where 'Account type' is 'Income'\"\n ]\n },\n {\n \"name\": \"Filter Transactions\",\n \"actions\": [\n \"Filter the transactions dataset to include only rows where 'Category PL' matches '1st level category' from the filtered accounts dataset\"\n ]\n },\n {\n \"name\": \"Prepare Data for Forecasting\",\n \"actions\": [\n \"Convert 'Transaction Date' column in the filtered transactions dataset to datetime format using pd.to_datetime()\",\n \"Sort the filtered transactions dataset by 'Transaction Date' in chronological order\",\n \"Aggregate the filtered transactions dataset by month, summing the 'Amount with sign' column to get monthly income amounts\"\n ]\n },\n {\n \"name\": \"Time Series Forecasting\",\n \"actions\": [\n \"Determine the length of historical data (number of months in the aggregated dataset)\",\n \"Set the forecast period to 8 months (from May 2024 to December 2024)\",\n \"If the length of historical data is less than 4, skip the model fitting\",\n \"If the length of historical data is less than or equal to 2 times the forecast period, set the model parameters to trend='add'\",\n \"Otherwise, set the model parameters to trend='add' and seasonal='add'\",\n \"Set the seasonal period to the minimum of the recommended seasonal period and half the length of historical data\",\n \"Fit an ExponentialSmoothing model to the historical data\",\n \"Generate forecasts for the next 8 months\"\n ]\n },\n {\n \"name\": \"Combine Actual and Forecast Data\",\n \"actions\": [\n \"Combine the historical data and the forecast data into a single dataset\",\n \"Add a new column to the combined dataset with values 'actual' for historical data and 'forecast' for forecast data\"\n ]\n },\n {\n \"name\": \"Visualization\",\n \"actions\": [\n \"Create a line plot using plotly to visualize the actual and forecasted income amounts over time\",\n \"Set the x-axis to 'Transaction Date' and the y-axis to 'Amount with sign'\",\n \"Differentiate between actual and forecasted data using different colors or line styles\"\n ]\n }\n ]\n}\n```\n<b><i>Generating code, attempt #1:</i></b>\n\n\n```python\ndef func(DF):\n # Import Libraries\n import pandas as pd\n import numpy as np\n from statsmodels.tsa.api import ExponentialSmoothing\n import plotly.express as px\n import copy\n\n # Step 1: Copy the data\n DF = copy.deepcopy(DF)\n \n # Step 2: Reference the datasets\n transactions = DF[\"step_30977_init_dataset_d8bddca9-84ec-424e-8741-80a883e8b9f5\"]\n accounts = DF[\"step_30978_init_dataset_0666fd77-7fa2-41de-a801-bba4cf854b6b\"]\n \n # Step 3: Filter the accounts dataset to include only rows where 'Account type' is 'Income'\n income_accounts = accounts[accounts['Account type'] == 'Income']\n \n # Step 4: Filter the transactions dataset to include only rows where 'Category PL' matches '1st level category' from the filtered accounts dataset\n 
filtered_transactions = transactions[transactions['Category PL'].isin(income_accounts['1st level category'])]\n \n # Step 5: Convert 'Transaction Date' column in the filtered transactions dataset to datetime format\n filtered_transactions['Transaction Date'] = pd.to_datetime(filtered_transactions['Transaction Date'])\n \n # Step 6: Sort the filtered transactions dataset by 'Transaction Date' in chronological order\n filtered_transactions = filtered_transactions.sort_values(by='Transaction Date')\n \n # Step 7: Aggregate the filtered transactions dataset by month, summing the 'Amount with sign' column to get monthly income amounts\n filtered_transactions['Month'] = filtered_transactions['Transaction Date'].dt.to_period('M')\n monthly_income = filtered_transactions.groupby('Month')['Amount with sign'].sum().reset_index()\n monthly_income['Month'] = monthly_income['Month'].dt.to_timestamp()\n \n # Step 8: Determine the length of historical data (number of months in the aggregated dataset)\n historical_length = len(monthly_income)\n \n # Step 9: Set the forecast period to 8 months (from May 2024 to December 2024)\n forecast_period = 8\n \n # Step 10: Check if the length of historical data is less than 4, skip the model fitting\n if historical_length < 4:\n result = {\"question\": \"Forecasting Income\",\n \"answer\": {\"message\": {\"name\": \"Insufficient Data\",\n \"dtype\": \"scalar\",\n \"data\": \"Not enough historical data to fit the model.\",\n \"category\": \"main\",\n \"description\": \"The historical data is less than 4 months, which is insufficient to fit the model.\"}}}\n return result\n \n # Step 11: Set the model parameters based on the length of historical data\n if historical_length <= 2 * forecast_period:\n trend = 'add'\n seasonal = None\n else:\n trend = 'add'\n seasonal = 'add'\n \n # Step 12: Set the seasonal period\n seasonal_period = min(12, historical_length // 2)\n \n # Step 13: Fit an ExponentialSmoothing model to the historical data\n model = ExponentialSmoothing(monthly_income['Amount with sign'], trend=trend, seasonal=seasonal, seasonal_periods=seasonal_period)\n fitted_model = model.fit()\n \n # Step 14: Generate forecasts for the next 8 months\n forecast = fitted_model.forecast(forecast_period)\n forecast_dates = pd.date_range(start=monthly_income['Month'].max() + pd.offsets.MonthBegin(1), periods=forecast_period, freq='MS')\n forecast_df = pd.DataFrame({'Month': forecast_dates, 'Amount with sign': forecast, 'Type': 'forecast'})\n \n # Step 15: Combine the historical data and the forecast data into a single dataset\n monthly_income['Type'] = 'actual'\n combined_data = pd.concat([monthly_income, forecast_df], ignore_index=True)\n \n # Step 16: Create a line plot using plotly to visualize the actual and forecasted income amounts over time\n fig = px.line(combined_data, x='Month', y='Amount with sign', color='Type', title='Actual and Forecasted Income Amounts')\n \n # Step 17: Prepare the result\n result = {\"question\": \"Forecasting Income\",\n \"answer\": {\"combined_data\": {\"name\": \"Combined Actual and Forecasted Income Data\",\n \"dtype\": \"dataframe\",\n \"data\": combined_data,\n \"category\": \"main\",\n \"description\": \"This dataset includes both the actual and forecasted income amounts, with a new column indicating whether the data is actual or forecasted.\"}},\n \"plots\": [{\"name\": \"Actual and Forecasted Income Amounts\",\n \"plot\": fig,\n \"description\": \"This line plot visualizes the actual and forecasted income amounts over time, 
differentiating between actual and forecasted data using different colors.\"}]}\n \n return result\n```\n<div class=\"tech-details-end\"/>\n\nThe analysis of the filtered financial transactions, where the 'Category PL' matches the '1st level category' from the accounts dataset and the 'Account type' is 'Income', led to the fitting of an ExponentialSmoothing model to predict future income amounts; the resulting dataset, \"Combined Actual and Forecasted Income Data,\" includes both actual and forecasted income amounts, with a clear distinction between the two types, showing actual income amounts for months such as January 2023 ($222.01), April 2023 ($354.00), July 2023 ($1193.00), October 2023 ($916.00), and March 2024 ($810.00), while forecasted income amounts for future months include June 2024 ($901.15), July 2024 ($934.21), August 2024 ($967.27), October 2024 ($1033.39), and November 2024 ($1066.45); the accompanying line plot, \"Actual and Forecasted Income Amounts,\" visually differentiates between actual and forecasted data, providing a clear trend analysis over time, which is crucial for financial planning and decision-making.\n\n ### Combined Actual and Forecasted Income Data\n| Month | Amount with sign | Type |\n|------------|--------------------|--------|\n| 2023-01-01 | 222.01 | actual |\n| 2023-02-01 | 569.22 | actual |\n| 2023-03-01 | 413.47 | actual |\n| 2023-04-01 | 354 | actual |\n| 2023-05-01 | 776.95 | actual |\nTable is too big, only first 5 rows are shown\n\n\n[Download full CSV](https://ai-analyst-api.testing.datrics.ai/project/exploration/result?file_path=225/ai_analyst/217/version/1/explore/6c6e017c-b947-4c0e-ac75-194622c2cb8a/datasets/1ba7cafa-db8f-4a19-af32-d01e927c47e7/result.csv)", "errors": null, "updated_at": "2024-07-01T10:57:54.649113+00:00", "feedback": null, "charts": [ { "title": "Actual and Forecasted Income Amounts", "path": "225/ai_analyst/217/version/1/explore/6c6e017c-b947-4c0e-ac75-194622c2cb8a/charts/0.html" } ], "tables": [ { "title": "combined_data", "path": "225/ai_analyst/217/version/1/explore/6c6e017c-b947-4c0e-ac75-194622c2cb8a/datasets/1ba7cafa-db8f-4a19-af32-d01e927c47e7/result.csv" } ] } ] }
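For illustration, a sketch of fetching a single conversation and walking through its messages; the field names follow the example response above, and apiKeyUuid is assumed to be in scope:

const item = await backendClient.post(
  `/project/exploration/conversation/item?apiKeyUuid=${apiKeyUuid}`,
  {
    uuid: '37bec2cf-060d-48e5-aea5-0369fb6cb2b0',
    user_uuid: '2db08c1c-e709-4a6c-ace1-eed41c2b8321',
    is_summary_show_tables: true,
  }
);
console.log(item.conversation.title);
for (const message of item.messages) {
  // 'user' messages carry the prompt; 'system' messages carry the answer text
  // plus paths to any generated charts and tables.
  if (message.role === 'system') {
    console.log(message.answer);
    message.charts.forEach((chart: { title: string; path: string }) => console.log(chart.path));
  }
}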
 

POST /project/exploration/conversation/create

Description: Creates a new conversation or a new message in an existing conversation.
Parameters:
  • apiKeyUuid (string): The API key UUID.
Payload:
  • conversation (object): Information about the conversation.
    • uuid (string): The UUID of the new conversation (should be generated on the client side).
    • user_uuid (string): The user UUID; it can be generated on the client or by a third-party backend system.
    • descriptor_base_path (string): Path to the descriptor of the AI Analyst settings.
  • message (object): Information about the first message in the conversation.
    • uuid (string): The UUID of the new message (should be generated on the client side).
    • user_uuid (string): The user UUID; it can be generated on the client or by a third-party backend system.
    • conversation_uuid (string): The UUID of the conversation the message is sent to.
    • prompt (string): The user's message.
Example Request:
backendClient.post('/project/exploration/conversation/create?apiKeyUuid={apiKeyUuid}', payload)
Example Payload:
{ "conversation": { "uuid":"6dfe831d-fe9a-431a-b212-82968e875d50", "user_uuid":"2db08c1c-e709-4a6c-ace1-eed41c2b8321", "descriptor_base_path":"225/ai_analyst/217/version/1" }, "message": { "uuid":"52cd6013-440c-4017-8dde-1a7ba7700cab", "user_uuid":"2db08c1c-e709-4a6c-ace1-eed41c2b8321", "conversation_uuid":"6dfe831d-fe9a-431a-b212-82968e875d50", "prompt":"What is the Net Profit amount for each month?" } }

POST /project/exploration/conversation/task

Description: Creates a task for the agent to answer the question.
Parameters:
  • apiKeyUuid (string): The API key UUID.
Payload:
  • conversation (object): Information about the conversation.
    • uuid (string): The UUID of the conversation (generated on the client side).
    • user_uuid (string): The user UUID; it can be generated on the client or by a third-party backend system.
    • descriptor_base_path (string): Path to the descriptor of the AI Analyst settings.
  • message (object): Information about the message sent to the conversation.
    • uuid (string): The UUID of the new message (should be generated on the client side).
    • user_uuid (string): The user UUID; it can be generated on the client or by a third-party backend system.
    • conversation_uuid (string): The UUID of the conversation the message is sent to.
    • prompt (string): The user's message.
Example Request:
backendClient.post('/project/exploration/conversation/task?apiKeyUuid={apiKeyUuid}', payload)
Example Payload:
{ "conversation": { "uuid":"6dfe831d-fe9a-431a-b212-82968e875d50", "user_uuid":"2db08c1c-e709-4a6c-ace1-eed41c2b8321", "descriptor_base_path":"225/ai_analyst/217/version/1" }, "message":{ "uuid":"5e736a34-91d3-4eca-ae7a-3a22cbfdc324", "user_uuid":"2db08c1c-e709-4a6c-ace1-eed41c2b8321", "conversation_uuid":"6dfe831d-fe9a-431a-b212-82968e875d50", "prompt":"What is the Net Profit amount for each month?", } }
Response: The backend returns a JSON response with a task_id that the client polls for the result.

Example of Response

{ "task_id": "bla50b40-0187-4e5b-aaa7-ec73fdb97fbe" }

GET /project/exploration/conversation/task/{task_id}

Description: Returns the status of the given task and, once finished, its result.
Parameters:
  • apiKeyUuid (string): The API key UUID.
  • task_id (string): The task_id retrieved from the create task endpoint
Example Request:
backendClient.get('/project/exploration/conversation/task/b1a50b40-0187-4e5b-aaa7-ec73fdb97fbe?apiKeyUuid={apiKeyUuid}')
Response: The backend will return a JSON response with the current status of the task or the result.

Example of Response

If the task is still running:
{ "status": "STARTED" }
If the task has failed:
{ "status": "STARTED", "failure": "Error goes here" }
If the task finished successfully, the response contains a summary and paths to the generated charts and datasets. Use the /download endpoint to download these files.
The resulting datasets are returned in Apache Parquet format.
{ "answer": { "summary": "The average monthly salary after taxes for various IT occupations is as follows: Freelance workers earn $2,888.82, full-time employees receive $3,191.97, those laid off make $2,540.04, individuals on paid bench earn $3,230.98, part-time workers get $1,845.05, those on partly paid bench receive $2,451.07, temporarily unemployed individuals (such as those on sabbatical or parental leave) ear n $2,805.03, those on unpaid vacation make $2,949.53, and individuals who previously worked in IT but are now in military service earn $2,737.81.", "charts": [ { "title": "avg_salary_per_occupation", "path": "224/ai_analyst/434/version/1/explore/c8a17470-7b65-4428-a27-fc8d3a4b9eb3/charts/e8a2e038-f270-40c2-a1c9-82d9a86344b/result.html" } ], "tables": [ { "title": "avg_salary_per_occupation", "path": "224/ai_analyst/434/version/1/explore/c8a17470-7b65-4428-a27-fc8d3a4b9eb3/datasets/e8a2e038-f270-40c2-a1c9-82d9a86344b/result.parquet" }, { "title": "avg_salary_per_occupation", "path": "224/ai_an alyst/ 434/version/1/explore/b1a50b40-0187-4e5b-aaa7-ec73fdb97fbe/datasets/7a22d237-7573-451b-83bd-6f461545739/result.parquet" } ] } }

POST /project/exploration/conversation/stream

Description: Streams the agent's response to a conversation message as text.
Parameters:
  • apiKeyUuid (string): The API key UUID.
Payload:
  • conversation (object): Information about the conversation.
    • uuid (string): The UUID of the conversation (generated on the client side).
    • user_uuid (string): The user UUID; it can be generated on the client or by a third-party backend system.
    • descriptor_base_path (string): Path to the descriptor of the AI Analyst settings.
  • message (object): Information about the message sent to the conversation.
    • uuid (string): The UUID of the new message (should be generated on the client side).
    • user_uuid (string): The user UUID; it can be generated on the client or by a third-party backend system.
    • conversation_uuid (string): The UUID of the conversation the message is sent to.
    • prompt (string): The user's message.
Example Request:
backendClient.post('/project/exploration/conversation/stream?apiKeyUuid={apiKeyUuid}', payload)
Example Payload:
{ "conversation": { "uuid":"6dfe831d-fe9a-431a-b212-82968e875d50", "user_uuid":"2db08c1c-e709-4a6c-ace1-eed41c2b8321", "descriptor_base_path":"225/ai_analyst/217/version/1" }, "message":{ "uuid":"5e736a34-91d3-4eca-ae7a-3a22cbfdc324", "user_uuid":"2db08c1c-e709-4a6c-ace1-eed41c2b8321", "conversation_uuid":"6dfe831d-fe9a-431a-b212-82968e875d50", "prompt":"What is the Net Profit amount for each month?", } }
Response: The backend returns a streaming text response.

Example of Response

<b><i>Task identifier:</i></b> ```json { "task_id": "9c251113-2f31-4c7d-a3e9-323839c8bcb4" } ``` <div class="tech-details-start"/> <b><i>Small talk result:</i></b> START <b><i>Selecting artifacts for answering the question:</i></b> ```json { "chain_of_thoughts": "To answer the question 'What is the Net Profit amount for each month?', we need to calculate the net profit using the formula: Net profit = Income + Expenses + Cost of sales + other income and expenses. The available datasets are 'Init Dataset - Financial Transactions from QB' and 'Init Dataset - Charts of Accounts from Quickbooks'. We need to use both datasets to filter and categorize the transactions correctly. The 'Init Dataset - Financial Transactions from QB' contains the transaction details, while the 'Init Dataset - Charts of Accounts from Quickbooks' provides the account types and categories. We will use the 'Transaction Date' column to group the data by month and calculate the net profit for each month.", "show_rules_check": "The user's question does not contain any request for visualization. Therefore, we will follow the general flow and not include any visualization in the modified question.", "modified_question": "Take 'Init Dataset - Financial Transactions from QB' and 'Init Dataset - Charts of Accounts from Quickbooks' datasets created at steps #32477 and #32478. Calculate the Net Profit amount for each month using the formula: Net profit = Income + Expenses + Cost of sales + other income and expenses. Use the 'Transaction Date' column to group the data by month.", "proposed_artifacts_rules": "The proposed artifacts should be consistent with the modified question. We need both datasets 'Init Dataset - Financial Transactions from QB' and 'Init Dataset - Charts of Accounts from Quickbooks' to perform the calculations.", "proposed_artifacts": [ { "step": 32477, "artifact": "init_dataset_d8bddca9-84ec-424e-8741-80a883e8b9f5", "name": "Init Dataset - Financial Transactions from QB", "type": "DATAFRAME" }, { "step": 32478, "artifact": "init_dataset_0666fd77-7fa2-41de-a801-bba4cf854b6b", "name": "Init Dataset - Charts of Accounts from Quickbooks", "type": "DATAFRAME" } ] } ``` <b><i>Preparing operations to be used in classifications:</i></b> ```json { "answer": [ { "operation": "Net profit", "steps_of_scenario": [ "To calculate net profit use formula: Net profit = Income + Expenses + Cost of sales + other income and expenses" ], "instructions": [ "To calculate net profit use formula: Net profit = Income + Expenses + Cost of sales + other income and expenses" ], "mandatory": [], "addition_notes": [] } ] } ``` <b><i>Getting scenario for multiple datasets:</i></b> ```json { "sequence": [ { "name": "Load Data", "actions": [ "Reference the 'Init Dataset - Financial Transactions from QB' dataset from DF using the key 'step_32477_init_dataset_d8bddca9-84ec-424e-8741-80a883e8b9f5'", "Reference the 'Init Dataset - Charts of Accounts from Quickbooks' dataset from DF using the key 'step_32478_init_dataset_0666fd77-7fa2-41de-a801-bba4cf854b6b'" ] }, { "name": "Filter Relevant Transactions", "actions": [ "Identify the '1st level category' values in the 'Init Dataset - Charts of Accounts from Quickbooks' that correspond to 'Income', 'Expenses', 'Cost of sales', 'Other income', and 'Other expense' in the 'Account type' column", "Filter the 'Init Dataset - Financial Transactions from QB' dataset to include only transactions where 'Category PL' matches the identified '1st level category' values" ] }, { "name": "Prepare Data for Net Profit 
Calculation", "actions": [ "Convert the 'Transaction Date' column in the filtered transactions dataset to datetime format if not already in that format", "Group the filtered transactions by month using the 'Transaction Date' column", "Sum the 'Amount with sign' for each month to get the total amount for each 'Category PL' within each month" ] }, { "name": "Calculate Net Profit", "actions": [ "For each month, calculate the Net Profit using the formula: Net profit = Income + Expenses + Cost of sales + other income and expenses", "Ensure that the calculation correctly adds or subtracts the amounts based on the 'Category PL' values" ] }, { "name": "Format and Sort Results", "actions": [ "Format the results to show the Net Profit for each month", "Sort the results in chronological order by month" ] }, { "name": "Visualization (if required)", "actions": [ "Use plotly to create a line chart showing the Net Profit for each month", "Label the x-axis as 'Month' and the y-axis as 'Net Profit'" ] } ] } ``` <b><i>Generating code, attempt #1:</i></b> ```python def func(DF): # Import Libraries import pandas as pd import numpy as np import plotly.express as px # Step 1: Load datasets transactions = DF["step_32477_init_dataset_d8bddca9-84ec-424e-8741-80a883e8b9f5"] accounts = DF["step_32478_init_dataset_0666fd77-7fa2-41de-a801-bba4cf854b6b"] # Step 2: Identify relevant '1st level category' values relevant_account_types = ['Income', 'Expenses', 'Cost of sales', 'Other income', 'Other expense'] relevant_categories = accounts[accounts['Account type'].isin(relevant_account_types)]['1st level category'].unique() # Step 3: Filter transactions based on 'Category PL' filtered_transactions = transactions[transactions['Category PL'].isin(relevant_categories)] # Step 4: Convert 'Transaction Date' to datetime if not already filtered_transactions['Transaction Date'] = pd.to_datetime(filtered_transactions['Transaction Date']) # Step 5: Group by month and sum 'Amount with sign' filtered_transactions['Month'] = filtered_transactions['Transaction Date'].dt.to_period('M') monthly_sums = filtered_transactions.groupby(['Month', 'Category PL'])['Amount with sign'].sum().reset_index() # Step 6: Pivot to get a table with months as rows and categories as columns monthly_pivot = monthly_sums.pivot(index='Month', columns='Category PL', values='Amount with sign').fillna(0) # Step 7: Calculate Net Profit for each month monthly_pivot['Net Profit'] = ( monthly_pivot.get('Income', 0) + monthly_pivot.get('Expenses', 0) + monthly_pivot.get('Cost of sales', 0) + monthly_pivot.get('Other income', 0) + monthly_pivot.get('Other expense', 0) ) # Step 8: Reset index to get 'Month' as a column and sort by 'Month' net_profit_df = monthly_pivot.reset_index() net_profit_df['Month'] = net_profit_df['Month'].dt.to_timestamp() net_profit_df = net_profit_df.sort_values(by='Month') # Step 9: Create a line chart for Net Profit fig = px.line(net_profit_df, x='Month', y='Net Profit', title='Net Profit by Month', labels={'Month': 'Month', 'Net Profit': 'Net Profit'}) # Result result = { "question": "Calculate the Net Profit amount for each month using the formula: Net profit = Income + Expenses + Cost of sales + other income and expenses.", "answer": { "net_profit_by_month": { "name": "Net Profit by Month", "dtype": "dataframe", "data": net_profit_df, "category": "main", "description": "This dataframe shows the Net Profit for each month, calculated using the formula: Net profit = Income + Expenses + Cost of sales + other income and expenses. 
The data is grouped by month and sorted in chronological order." } }, "plots": [ { "name": "Net Profit by Month", "plot": fig, "description": "This line chart visualizes the Net Profit for each month. The x-axis represents the months, and the y-axis represents the Net Profit." } ] } return result ``` <div class="tech-details-end"/> The analysis of the financial data reveals that the Net Profit for each month was calculated using the formula: Net profit = Income + Expenses + Cost of sales + other income and expenses, and the data was grouped by month and sorted chronologically; however, the Net Profit for all months listed (January 2023, March 2023, April 2023, May 2023, July 2023, August 2023, September 2023, November 2023, January 2024, and May 2024) consistently resulted in a value of zero, indicating that the total income and expenses balanced out to zero for each of these months, with notable expenses including Executive salary staff, Marketing media, and Freelancer costs being significant contributors to the overall financial activity, while the Revenue - General was relatively low in comparison, suggesting a need for a detailed review of expense management and revenue generation strategies.
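For illustration, a sketch of reading the streamed text with fetch and a ReadableStream reader (BASE_URL and apiKeyUuid as in the client sketch above; payload has the same conversation/message shape as /conversation/create):

const response = await fetch(
  `${BASE_URL}/project/exploration/conversation/stream?apiKeyUuid=${apiKeyUuid}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  }
);
const reader = response.body!.getReader();
const decoder = new TextDecoder();
let streamedText = '';
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  streamedText += decoder.decode(value, { stream: true });
  // Incrementally render streamedText to the user as it arrives.
}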

Parsing text response

The client must extract the task identifier from the response in order to be able to cancel the task. The task identifier is always sent as the first JSON object in the response; the relevant piece of the response looks like this:
<b><i>Task identifier:</i></b> ```json { "task_id": "9c251113-2f31-4c7d-a3e9-323839c8bcb4" } ```
Next, the backend streams technical details. Everything between the tech-details-start and tech-details-end tags is optional and represents the agent's log, including the generated Python code and its chain of thoughts.
<div class="tech-details-start"/> .... <div class="tech-details-end"/>
The client can simply ignore everything between these two tags.
The final, human-readable result follows the tech-details-end tag.
After the stream finishes, the client calls the /generate-charts and /generate-tables endpoints to retrieve the charts and data tables for the response.
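A sketch of the parsing described in this section: extract the first JSON object (the task identifier) and strip the optional tech-details block. The regular expressions are assumptions based on the markup shown above:

function parseStreamedAnswer(streamedText: string): { taskId: string | null; answer: string } {
  // The task identifier is always the first JSON object in the stream.
  const taskMatch = streamedText.match(/"task_id"\s*:\s*"([^"]+)"/);
  const taskId = taskMatch ? taskMatch[1] : null;
  // Drop the leading task identifier block and the optional agent log
  // between the tech-details-start and tech-details-end tags.
  const answer = streamedText
    .replace(/<b><i>Task identifier:<\/i><\/b>\s*```json[\s\S]*?```/, '')
    .replace(/<div class="tech-details-start"\/>[\s\S]*?<div class="tech-details-end"\/>/g, '')
    .trim();
  return { taskId, answer };
}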

POST /project/exploration/conversation/stream/stop

Description: Cancels a task.
Parameters:
  • task_id (string): The task ID.
  • apiKeyUuid (string): The API key UUID.
Example Request:
backendClient.post('/project/exploration/conversation/stream/stop?task_id={task_id}&apiKeyUuid={apiKeyUuid}', payload)
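For illustration, cancelling the task identified by the task_id parsed from the streamed response (see the parsing sketch above). The sketch assumes an empty payload is acceptable, since both parameters are passed in the query string:

await backendClient.post(
  `/project/exploration/conversation/stream/stop?task_id=${taskId}&apiKeyUuid=${apiKeyUuid}`,
  {} // assumption: no request body is required beyond the query parameters
);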
 

POST /project/exploration/generate-charts

Description: Generates chart files for the given message and returns paths to download them.
Parameters:
  • apiKeyUuid (string): The API key UUID.
Payload:
  • uuid (string): The UUID of the message sent by the user.
  • user_uuid (string): The user UUID; it can be generated on the client or by a third-party backend system.
  • conversation_uuid (string): The UUID of the conversation the message was sent to.
Example Request:
backendClient.post('/project/exploration/generate-charts?apiKeyUuid={apiKeyUuid}', payload)
Example Payload:
{ "uuid":"5e736a34-91d3-4eca-ae7a-3a22cbfdc324", "user_uuid":"2db08c1c-e709-4a6c-ace1-eed41c2b8321", "conversation_uuid":"6dfe831d-fe9a-431a-b212-82968e875d50" }
Example Response:
{ "charts": [ { "path": "225/ai_analyst/217/version/1/explore/9c251113-2f31-4c7d-a3e9-323839c8bcb4/charts/0.html", "title": "Net Profit by Month" } ] }
If charts were generated by the AI Agent, the response contains the paths for downloading the HTML chart files.
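For illustration, a sketch of requesting the chart paths once the stream has finished (messageUuid, userUuid and conversationUuid are the client-generated values used in the stream payload):

const { charts } = await backendClient.post(
  `/project/exploration/generate-charts?apiKeyUuid=${apiKeyUuid}`,
  { uuid: messageUuid, user_uuid: userUuid, conversation_uuid: conversationUuid }
);
for (const chart of charts) {
  // Each entry has a title and a path that can be passed to the /download endpoint.
  console.log(chart.title, chart.path);
}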

POST /project/exploration/generate-tables

Description: Generates data table files for the given message and returns paths to download them.
Parameters:
  • apiKeyUuid (string): The API key UUID.
Payload:
  • uuid (string): The UUID of the message sent by the user.
  • user_uuid (string): The user UUID; it can be generated on the client or by a third-party backend system.
  • conversation_uuid (string): The UUID of the conversation the message was sent to.
Example Request:
backendClient.post('/project/exploration/generate-tables?apiKeyUuid={apiKeyUuid}', payload)
Example Payload:
{ "uuid":"5e736a34-91d3-4eca-ae7a-3a22cbfdc324", "user_uuid":"2db08c1c-e709-4a6c-ace1-eed41c2b8321", "conversation_uuid":"6dfe831d-fe9a-431a-b212-82968e875d50" }
Example Response:
{ "tables": [ { "path": "225/ai_analyst/217/version/1/explore/9c251113-2f31-4c7d-a3e9-323839c8bcb4/datasets/e5455235-9786-4379-8983-3819aa8e1859/result.csv", "title": "Net Profit by Month" } ] }
If data tables were generated by the AI Agent, the response contains the paths for downloading the CSV files with the data.
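The data tables are retrieved the same way; each returned path points at a CSV file that can be fetched via /download. A sketch under the same assumptions as the chart example above:

const { tables } = await backendClient.post(
  `/project/exploration/generate-tables?apiKeyUuid=${apiKeyUuid}`,
  { uuid: messageUuid, user_uuid: userUuid, conversation_uuid: conversationUuid }
);
const tablePaths = tables.map((table: { title: string; path: string }) => table.path);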

POST /download

Description: Downloads a chart or CSV file generated by the AI Agent.
Parameters:
  • path (string): The path to the file to download.
  • apiKeyUuid (string): The API key UUID.
Example request:
https://api.testing.datrics.ai/api/v1/download?path=225/ai_analyst/217/version/1/explore/9c251113-2f31-4c7d-a3e9-323839c8bcb4/charts/0.html&apiKeyUuid=yhegyCmy.YvUBnYDBM3NYoGV0DIbdH5zoFy5BA3CF
Response:
The response body is text/html or text/csv, depending on the file type.
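A sketch of a download helper that returns the raw file contents (HTML for charts, CSV for tables), assuming the same BASE_URL as the client sketch above. The documented example request is a plain URL, so this sketch issues a GET; adjust the method if your deployment expects POST:

async function downloadFile(path: string, apiKeyUuid: string): Promise<string> {
  // The path is passed as-is, matching the documented example request.
  const url = `${BASE_URL}/download?path=${path}&apiKeyUuid=${apiKeyUuid}`;
  const response = await fetch(url);
  // The body is text/html for charts and text/csv for data tables.
  return response.text();
}

// Usage: download the chart HTML returned by /generate-charts in the sketch above.
const chartHtml = await downloadFile(charts[0].path, apiKeyUuid);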

Extended API Endpoints

GET /project

Description: Fetches project details.
Parameters:
  • descriptor_base_path (string): The base path for the descriptor.
  • apiKeyUuid (string): The API key UUID.
Example Request:
backendClient.get('/project?descriptor_base_path={descriptor_base_path}&apiKeyUuid={apiKeyUuid}')

POST /project/exploration/feedback

Description: Sends feedback for the project.
Parameters:
  • apiKeyUuid (string): The API key UUID.
Example Request:
backendClient.post('/project/exploration/feedback?apiKeyUuid={apiKeyUuid}', payload)
 
 

Webhook support is coming soon.
