PostFinance Challenge

Right after cloning the repository, run:

git lfs install
git submodule init
git submodule update

Revolutionize API Testing in Banking with GenAI

Case Introduction:

We invite you to leverage Generative AI to transform the way APIs are tested in the banking sector. By automating test case generation and debugging, GenAI promises to streamline developers' workflows and raise both efficiency and quality. We challenge you to envision innovative GenAI applications for end-to-end testing that use precise standards and use case definitions to intelligently suggest improvements during the testing phase. Such a solution not only eases the workload of developers but also ensures that business requirements are accurately reflected in the testing process, improving overall product quality.

What is the current problem?

In the banking sector, API testing is a time-consuming and largely manual process. Developers spend significant time writing and debugging test cases, often without comprehensive specifications, which leads to inefficiencies and overlooked details. With this project, we aim to disrupt manual debugging processes and demonstrate the value of comprehensive specifications for testing efficiency and quality. By automating test case generation and debugging with Generative AI, we seek to close these gaps and raise the overall quality of testing services in the banking sector.

Who are the users of this solution?

The primary users of this solution are Software Developers who are responsible for API development and testing. Additionally, Business stakeholders involved in defining technical and business requirements will benefit from this solution.

Use Case / Business Case:

  • The primary goal of this project is to revolutionize API testing in the banking sector by leveraging Generative AI.
  • Participants are encouraged to come up with innovative use cases that demonstrate the effectiveness of GenAI in automating test case generation and debugging processes.

Use Case:

  • The solution aims to significantly reduce the time and effort required for API testing, allowing developers to focus more on product development.
  • By automating test case generation and debugging, the solution ensures comprehensive test coverage and identifies bugs efficiently, thereby improving overall product quality.
  • Business stakeholders benefit from accurate and comprehensive testing, ensuring that business requirements are met and reflected in the final product.

Business Case:

By implementing this solution, banks and financial institutions can:

  • Reduce the time and cost associated with manual testing processes.
  • Improve the efficiency and accuracy of API testing, leading to higher product quality.
  • Enhance collaboration between development and business teams, ensuring that business requirements are accurately reflected in the testing process.

Overall, the use case for this solution revolves around enhancing efficiency, reducing manual effort, and improving the quality of API testing in the banking sector. Participants are encouraged to explore use cases that demonstrate these benefits effectively.

Expected Outcome:

  1. Develop a framework/program capable of generating test code from technical and business descriptions, automating the tedious process of manual test case creation. This framework should intelligently analyze both technical and business-like specifications to ensure comprehensive test coverage (see the sketch after this list).
  2. Utilize Generative AI to identify bugs efficiently and effectively. The final product should not only identify bugs but also provide helpful error messages or hints, aiding developers in the debugging process.
  3. Optionally, implement a feature for root cause analysis, which identifies the specific API or application responsible for errors. Pinpointing the exact API or application involved in an error will help simplify debugging and improve overall testing efficiency.
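
As one possible starting point for outcome 1, the sketch below asks a GenAI model to turn an OpenAPI excerpt and a user story into a JUnit/REST Assured test class. It is a minimal sketch under stated assumptions, not a reference implementation: it assumes the provided key works against an OpenAI-compatible chat-completions endpoint, and the model name, file names, and prompt wording are illustrative only.

// TestCaseGenerator.java - minimal sketch, not a reference implementation.
// Assumptions (not taken from the challenge material): the provided key works
// against an OpenAI-compatible /v1/chat/completions endpoint, and the model
// name "gpt-4" and file names openapi.yaml / user-story.md are illustrative.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class TestCaseGenerator {

    // Escape a string for embedding in a JSON string literal.
    private static String json(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"")
                .replace("\n", "\\n").replace("\r", "").replace("\t", "\\t");
    }

    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("OPENAI_API_KEY");            // the provided GenAI key
        String spec = Files.readString(Path.of("openapi.yaml"));    // technical description
        String story = Files.readString(Path.of("user-story.md"));  // business description

        String prompt = "Generate a JUnit 5 + REST Assured test class that covers the user story "
                + "below against the given OpenAPI specification. Return only Java code.\n\n"
                + "--- OpenAPI ---\n" + spec + "\n\n--- User story ---\n" + story;

        String body = "{\"model\":\"gpt-4\",\"messages\":[{\"role\":\"user\",\"content\":\""
                + json(prompt) + "\"}]}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The raw JSON response contains the generated test class; extracting it and
        // writing it to src/test/java is left to the participants.
        System.out.println(response.body());
    }
}

The same call pattern can be reused for outcome 2 by sending a failing test's output or stack trace back to the model and asking for an explanation or a fix hint.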

Presentation Prototype:

  • Live demonstration with the Challenge dataset

Key elements:

  • Source Code
  • Pitfalls, learnings and the best way to work with GenAI

The Pitch:

Pitchdeck

Deep Dive Slides:

Deep Dive Slides

Resources:

To effectively solve this case, we provide you with a micro-bank and data:

  • Open-source code and data for testing purposes are available:
    • in the submodule ./source,
    • in the repository @postfinance/swiss-hacks.
  • The code includes the related OpenAPI specifications and user stories; the microservice itself is runnable and serves as the test object.
  • A list of users with bank accounts is provided for validation and testing purposes.
  • Microservice interfaces: synchronous HTTP(S).
  • For the final solution and presentation, the same data will be used with minor alterations to test the efficacy of GenAI in handling unknown documents or services.

These datasets and resources can be used in developing, testing, and presenting your solution.
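
To make the resources concrete, here is what a single generated test against the running micro-bank could look like, using JUnit 5 and REST Assured on the Java/Quarkus stack listed under the technologies below (assuming REST Assured, JUnit 5 and Hamcrest are on the test classpath). This is a hedged sketch: the port, the /accounts/{iban} endpoint, the sample IBAN, and the response field names are assumptions for illustration and must be replaced with the operations defined in the provided OpenAPI specification.

// AccountApiTest.java - illustrative only. The port, the /accounts/{iban} endpoint,
// the sample IBAN, and the JSON field names are assumptions; replace them with the
// operations defined in the provided OpenAPI specification.

import io.restassured.RestAssured;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

class AccountApiTest {

    @BeforeAll
    static void configure() {
        // Point REST Assured at the locally running micro-bank (assumed port).
        RestAssured.baseURI = "http://localhost:8080";
    }

    @Test
    void knownUserAccountCanBeFetched() {
        given()
            .accept("application/json")
        .when()
            .get("/accounts/{iban}", "CH9300762011623852957") // IBAN from the provided user list (assumed)
        .then()
            .statusCode(200)
            .body("currency", equalTo("CHF")); // assumed response field
    }
}

A generated suite would contain many such tests, each traceable to a user story and an OpenAPI operation.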

To ensure the successful completion of this project, the following technologies are vital:

  • Microservice Development: Java / Quarkus will be used to develop the microservice.
  • Generative AI (GenAI): We provide you with a licensed ChatGPT 4 API key. You may also train your own model; the choice of model is free.
  • Version Control and Collaboration: Git for version control.
  • Internet Access: Stable internet access is essential for accessing required resources and for collaboration.
  • Publishing: Easy-to-use publishing tools for quick deployment and testing.
  • Docker (or similar virtualization): Docker will be utilized for virtualization, making development and deployment more efficient.
  • Additional: Chocolate (optional, but highly recommended for productivity enhancement 😉).

Judging Criteria:

The exact percentage weights of the judging criteria are still to be defined.

  • Creativity (XX%) - How good is the model or its design?
  • Feasibility (XX%) - How flexible is the model, and in which other use cases could it be applied?
  • Technology (XX%) - How many bugs can the solution identify? How good is the code produced by the generator?

Point of Contact:

  • Christian Sager (project representative)
  • Ramona Koksa (Jury Main Member 1)
  • Kevin Rauner (Jury Member 2)
  • Timon Borter (Nerd)

Prize:

  • Giveaway bags
  • An invitation to present at DevDays at PostFinance in December