This is another point of confusion.
When we visit the Data pane, it is displayed in a card style like this.
With other AI agents, the data pane is presented in a tabular mode. Where does this difference come from?
This may possibly bring up another confusion. Why does the user need to hit Finish for each step? I expected the AI agent would run through the steps one by one without user interaction.
There are options at the bottom of the pane to see the data in different views: tabular view, grid/card view, and Kanban.
For something that is meant to show images/files, we default to the grid/card view for better visibility. For other data, we default to the table/spreadsheet view.
That's how we originally had it configured. And that works fine if the agent is perfect in its work. But if the agent has any errors or needs human verification, then it creates problems if the agent runs ahead at full steam building on top of earlier mistakes. And AI work has $$ costs also, so we don't want to get customers upset with runaway work.
So that's why we default data to start in Draft mode (so the agent doesn't start until you explicitly tell it to get started on one data row), and that's also why the agent doesn't automatically decide when the task is fully completed. We do have that option to give it the flexibility to do so, but it is hidden for now.
But very much, our intention is to enable automatic flow-through once a thunk has been tested and is performing as expected.
Btw, one thing, Koichi: I notice that all your thunks have very long descriptions.
Please note that there is the ability to provide both a "goal" and "details/process". Our recommendation is to have a shorter goal (like all the examples) and put the longer description into the details/process. That makes the UI more readable.
Understood the notion behind it. Yes, when we build a new agent, the first stage should be a testing phase, so it is reasonable to run step by step. But once we confirm the agent works as expected and perfectly, we should have the choice to run ALL the tasks without user intervention or involvement.
Thank you.
We welcome any tips on building AI agents. I expected the instructions to the AI should be as detailed as possible so that the agent can handle advanced and complex workflows.
The name should be simple, I understand. But what is the best practice for instructing the agent to build complex workflows?
The description doesn't have to all be provided at the beginning. If you use the "Workflow" pattern (when you create a thunk, you are prompted to use the TaskList, File Folder, Spreadsheet, or Workflow patterns), then you can pause at each stage of planning and refine the plan. You can do this either by directly editing the plan, or by telling the agent to change something or redo something. So that is for the rich/complex use case, whereas the other patterns are simpler.
@Koichi_Tsuji , the option to automatically run the agent is implemented and will be rolled out this week. And we are also building an easy way to connect it to external systems (including AppSheet).
So in short: you will be able to send a new data entry to a thunk from an external application flow, process it automatically using AI, and then send it back or send it onward to a different service. All 100% done by AI without any human intervention. Details shortly.
Very interesting, and I can't wait to test it!
Is that an API type of integration, I assume? (I.e., can we get the response from Thunk.AI by calling a POST request, instead of a Webhook that just throws a request?)
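To make the webhook-vs-POST distinction concrete, here is a minimal sketch of what a request/response integration could look like. Note that the endpoint URL, payload schema, and auth header below are purely hypothetical illustrations, not the actual Thunk.AI API (which has not been published in this thread):

```python
# Hypothetical sketch only: the endpoint, payload shape, and auth scheme
# are assumptions for illustration -- the real Thunk.AI API may differ.
import json
from urllib import request

THUNK_API = "https://api.thunk.ai/v1/thunks/{thunk_id}/rows"  # assumed URL


def build_row_payload(fields: dict) -> bytes:
    """Serialize a new data row as a JSON request body (assumed schema)."""
    return json.dumps({"row": fields}).encode("utf-8")


def send_row(thunk_id: str, fields: dict, api_key: str) -> dict:
    """POST a new row to a thunk and return the agent's JSON response.

    Unlike a fire-and-forget webhook, a synchronous POST lets the caller
    read the processed result directly from the HTTP response body.
    """
    req = request.Request(
        THUNK_API.format(thunk_id=thunk_id),
        data=build_row_payload(fields),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

The design difference in a nutshell: a webhook only pushes data out, so the caller would need a separate callback URL to learn the outcome, whereas a request/response POST like the sketch above blocks until the result comes back in the same HTTP exchange.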