From e0975ab30abcd2f74e92f258bf5b1916607bb66c Mon Sep 17 00:00:00 2001
From: zyh
Date: Wed, 23 Oct 2024 17:28:49 +0800
Subject: [PATCH 1/2] feature: add OceanBase vector database

---
 ...ng_started_with_OceanBase_and_OpenAI.ipynb | 502 ++++++++++++++++++
 1 file changed, 502 insertions(+)
 create mode 100644 examples/vector_databases/oceanbase/Getting_started_with_OceanBase_and_OpenAI.ipynb

diff --git a/examples/vector_databases/oceanbase/Getting_started_with_OceanBase_and_OpenAI.ipynb b/examples/vector_databases/oceanbase/Getting_started_with_OceanBase_and_OpenAI.ipynb
new file mode 100644
index 0000000000..80660cf2ff
--- /dev/null
+++ b/examples/vector_databases/oceanbase/Getting_started_with_OceanBase_and_OpenAI.ipynb
@@ -0,0 +1,502 @@
+{
+ "cells": [
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Using OCEANBASE V4.3.3 as a vector database for OpenAI embeddings\n",
+    "\n",
+    "This notebook guides you step by step through using OCEANBASE-PG as a vector database for OpenAI embeddings.\n",
+    "\n",
+    "This notebook presents an end-to-end process of:\n",
+    "1. Using precomputed embeddings created by the OpenAI API.\n",
+    "2. Storing the embeddings in a cloud instance of OCEANBASE-PG.\n",
+    "3. Converting a raw text query to an embedding with the OpenAI API.\n",
+    "4. Using OCEANBASE-PG to perform the nearest neighbour search in the created collection.\n",
+    "\n",
+    "### What is OCEANBASE V4.3.3\n",
+    "\n",
+    "[OceanBase](https://www.oceanbase.com/) V4.3.3 is the first GA release of the 4.3 series and brings several key advances. First, it extends the relational database with vector types and vector indexes suited to AI analysis and processing. Second, to provide strong isolation between TP and AP resources in HTAP mixed-workload scenarios, it introduces a new column-store replica form. The release also delivers significant performance improvements for AP query tasks. Other new features include the Array complex data type, faster Roaringbitmap computation, enhanced materialized-view refresh, extended foreign-table functionality with faster foreign-table import, and optimized plan generation and execution strategies for AP-class SQL, all of which strengthen OLAP capabilities. Most features of V4.2.4 and earlier are also supported in V4.3.3, and a future converged release will cover OLTP services as well.\n",
+    "\n",
+    "\n",
+    "### Deployment options\n",
+    "\n",
+    "- Follow the [OceanBase V4.3.3 vector database documentation](https://www.oceanbase.com/docs/common-oceanbase-database-cn-1000000001428560) for a quick deployment."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Prerequisites\n",
+    "\n",
+    "For the purposes of this exercise we need to prepare a couple of things:\n",
+    "\n",
+    "1. An OCEANBASE cloud server instance.\n",
+    "2. The `mysql-connector-python` library to interact with the vector database (any other MySQL-compatible OceanBase client library works too).\n",
+    "3. An [OpenAI API key](https://beta.openai.com/account/api-keys)."
+   ]
+  },
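+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "If you don't have a cloud instance handy, a disposable local instance can also work for this exercise. The following is a rough sketch for local testing only; the image name and port mapping are assumptions based on OceanBase's community Docker image, so check the OceanBase documentation for the currently recommended invocation:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Assumption: the community image oceanbase/oceanbase-ce exposes the MySQL-compatible SQL port 2881\n",
+    "! docker run -d --name oceanbase-test -p 2881:2881 oceanbase/oceanbase-ce"
+   ]
+  },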
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can validate that the server was launched successfully by opening a connection and running a simple test query, as shown in the sections below."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Install requirements\n",
+    "\n",
+    "This notebook requires the `openai` and `mysql-connector-python` packages, plus a few additional libraries we will use. The following command installs them all:\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "! pip install openai mysql-connector-python pandas wget"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Prepare your OpenAI API key\n",
+    "\n",
+    "The OpenAI API key is used for vectorization of the documents and queries.\n",
+    "\n",
+    "If you don't have an OpenAI API key, you can get one from https://beta.openai.com/account/api-keys.\n",
+    "\n",
+    "Once you get your key, please add it to your environment variables as OPENAI_API_KEY.\n",
+    "\n",
+    "If you have any doubts about setting the API key through environment variables, please refer to [Best Practices for API Key Safety](https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "OPENAI_API_KEY is ready\n"
+     ]
+    }
+   ],
+   "source": [
+    "import os\n",
+    "\n",
+    "# Test that your OpenAI API key is correctly set as an environment variable\n",
+    "# Note. if you run this notebook locally, you will need to reload your terminal and the notebook for the env variables to be live.\n",
+    "\n",
+    "if os.getenv(\"OPENAI_API_KEY\") is not None:\n",
+    "    print(\"OPENAI_API_KEY is ready\")\n",
+    "else:\n",
+    "    print(\"OPENAI_API_KEY environment variable not found\")"
+   ]
+  },
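+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a quick sanity check, create a single embedding and confirm its dimensionality matches the `VECTOR(1536)` columns we will define later in this notebook. This is a minimal sketch assuming the `openai>=1.0` Python client:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from openai import OpenAI\n",
+    "\n",
+    "client = OpenAI()  # reads OPENAI_API_KEY from the environment\n",
+    "\n",
+    "# text-embedding-3-small returns 1536-dimensional vectors by default\n",
+    "response = client.embeddings.create(input=\"hello world\", model=\"text-embedding-3-small\")\n",
+    "print(len(response.data[0].embedding))"
+   ]
+  },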
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Connect to OCEANBASE\n",
+    "\n",
+    "First add the connection parameters to your environment variables, or simply change the `mysql.connector.connect` arguments below.\n",
+    "\n",
+    "Connecting to a running instance of OCEANBASE server is easy with the official Python library:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "import mysql.connector\n",
+    "\n",
+    "# Note. alternatively you can set a temporary env variable like this:\n",
+    "# os.environ[\"OCEANBASE_HOST\"] = \"your_host\"\n",
+    "# os.environ[\"OCEANBASE_PORT\"] = \"2881\"\n",
+    "# os.environ[\"OCEANBASE_DATABASE\"] = \"your_database\"\n",
+    "# os.environ[\"OCEANBASE_USER\"] = \"your_user\"\n",
+    "# os.environ[\"OCEANBASE_PASSWORD\"] = \"your_password\"\n",
+    "\n",
+    "connection = mysql.connector.connect(\n",
+    "    host=os.environ.get(\"OCEANBASE_HOST\", \"localhost\"),\n",
+    "    port=os.environ.get(\"OCEANBASE_PORT\", \"2881\"),\n",
+    "    database=os.environ.get(\"OCEANBASE_DATABASE\", \"your_database\"),\n",
+    "    user=os.environ.get(\"OCEANBASE_USER\", \"your_user\"),\n",
+    "    password=os.environ.get(\"OCEANBASE_PASSWORD\", \"your_password\")\n",
+    ")\n",
+    "\n",
+    "# Create a new cursor object\n",
+    "cursor = connection.cursor()\n",
+    "\n",
+    "# Make sure to close the connection and cursor after use\n",
+    "# cursor.close()\n",
+    "# connection.close()"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We can test the connection by running a simple query:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Connection successful!\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Execute a simple query to test the connection\n",
+    "cursor.execute(\"SELECT 1;\")\n",
+    "result = cursor.fetchone()\n",
+    "\n",
+    "# Check the query result\n",
+    "if result == (1,):\n",
+    "    print(\"Connection successful!\")\n",
+    "else:\n",
+    "    print(\"Connection failed.\")"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Download data\n",
+    "\n",
+    "Next, download the precomputed Wikipedia article embeddings that we will load into OCEANBASE:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'vector_database_wikipedia_articles_embedded.zip'"
+      ]
+     },
+     "execution_count": 7,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "import wget\n",
+    "\n",
+    "embeddings_url = \"https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip\"\n",
+    "\n",
+    "# The file is ~700 MB so this will take some time\n",
+    "wget.download(embeddings_url)"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The downloaded file then has to be extracted:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "The file vector_database_wikipedia_articles_embedded.csv exists in the data directory.\n"
+     ]
+    }
+   ],
+   "source": [
+    "import zipfile\n",
+    "import os\n",
+    "\n",
+    "current_directory = os.getcwd()\n",
+    "zip_file_path = os.path.join(current_directory, \"vector_database_wikipedia_articles_embedded.zip\")\n",
+    "output_directory = os.path.join(current_directory, \"../../data\")\n",
+    "\n",
+    "with zipfile.ZipFile(zip_file_path, \"r\") as zip_ref:\n",
+    "    zip_ref.extractall(output_directory)\n",
+    "\n",
+    "# Check that the extracted CSV file exists\n",
+    "file_name = \"vector_database_wikipedia_articles_embedded.csv\"\n",
+    "data_directory = os.path.join(current_directory, \"../../data\")\n",
+    "file_path = os.path.join(data_directory, file_name)\n",
+    "\n",
+    "if os.path.exists(file_path):\n",
+    "    print(f\"The file {file_name} exists in the data directory.\")\n",
+    "else:\n",
+    "    print(f\"The file {file_name} does not exist in the data directory.\")"
+   ]
+  },
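+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Optionally, preview a few rows to confirm the schema matches the table we are about to create. This is a minimal sketch using `pandas` (installed above) and the `file_path` from the previous cell:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import pandas as pd\n",
+    "\n",
+    "# Read just the first few rows; the full file is large\n",
+    "preview = pd.read_csv(file_path, nrows=3)\n",
+    "print(preview.columns.tolist())\n",
+    "preview.head()"
+   ]
+  },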
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Index data\n",
+    "\n",
+    "OCEANBASE stores data in __tables__ where each row is described by at least one vector. Our table will be called **articles** and each row will be described by both **title** and **content** vectors.\n",
+    "\n",
+    "We will start by creating the table with a vector index on both **title_vector** and **content_vector**, and then fill it with our precomputed embeddings."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "create_table_sql = '''\n",
+    "CREATE TABLE IF NOT EXISTS articles (\n",
+    "    id INTEGER NOT NULL,\n",
+    "    url TEXT,\n",
+    "    title TEXT,\n",
+    "    content TEXT,\n",
+    "    title_vector vector(1536),\n",
+    "    content_vector vector(1536),\n",
+    "    vector_id INTEGER,\n",
+    "    PRIMARY KEY (`id`),\n",
+    "    VECTOR INDEX `content_vector` (`content_vector`) WITH (distance=L2, type=HNSW),\n",
+    "    VECTOR INDEX `title_vector` (`title_vector`) WITH (distance=L2, type=HNSW)\n",
+    ");\n",
+    "'''\n",
+    "\n",
+    "# Execute the SQL statement\n",
+    "cursor.execute(create_table_sql)\n",
+    "\n",
+    "# Commit the changes\n",
+    "connection.commit()"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Load data\n",
+    "\n",
+    "In this section we load the dataset prepared earlier, so you don't have to recompute the embeddings of the Wikipedia articles with your own credits. We stream the CSV and insert the rows in batches; the vector columns are stored in the CSV as text literals (for example `[0.12, -0.04, ...]`), which can be inserted directly into the `VECTOR(1536)` columns."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import csv\n",
+    "\n",
+    "# Path to your local CSV file\n",
+    "csv_file_path = '../../data/vector_database_wikipedia_articles_embedded.csv'\n",
+    "\n",
+    "insert_sql = '''\n",
+    "INSERT INTO articles (id, url, title, content, title_vector, content_vector, vector_id)\n",
+    "VALUES (%s, %s, %s, %s, %s, %s, %s)\n",
+    "'''\n",
+    "\n",
+    "# Stream the file and insert in batches; the vector columns are text\n",
+    "# literals such as \"[0.12, -0.04, ...]\" that the server casts on insert\n",
+    "batch, batch_size = [], 1000\n",
+    "with open(csv_file_path, 'r', encoding='utf-8') as f:\n",
+    "    reader = csv.reader(f)\n",
+    "    next(reader)  # skip the header row\n",
+    "    for row in reader:\n",
+    "        batch.append(row)\n",
+    "        if len(batch) >= batch_size:\n",
+    "            cursor.executemany(insert_sql, batch)\n",
+    "            connection.commit()\n",
+    "            batch = []\n",
+    "\n",
+    "if batch:\n",
+    "    cursor.executemany(insert_sql, batch)\n",
+    "    connection.commit()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Count:25000\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Check the table size to make sure all the rows have been stored\n",
+    "count_sql = \"\"\"select count(*) from articles;\"\"\"\n",
+    "cursor.execute(count_sql)\n",
+    "result = cursor.fetchone()\n",
+    "print(f\"Count:{result[0]}\")"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Search data\n",
+    "\n",
+    "Once the data is loaded into OCEANBASE we will start querying the table for the closest vectors. We may provide an additional parameter `vector_name` to switch from title-based to content-based search. Since the precomputed embeddings were created with the `text-embedding-3-small` OpenAI model, we also have to use it during search."
+   ]
+  },
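+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Before wiring the search up to OpenAI, it can help to sanity-check the distance function on small literal vectors. This is a minimal sketch; it assumes OCEANBASE's `l2_distance` function accepts vector literals written as strings:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# l2_distance('[1, 2, 3]', '[1, 2, 2]') should return 1.0, i.e. sqrt((3 - 2)^2)\n",
+    "cursor.execute(\"SELECT l2_distance('[1, 2, 3]', '[1, 2, 2]');\")\n",
+    "print(cursor.fetchone()[0])"
+   ]
+  },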
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from openai import OpenAI\n",
+    "\n",
+    "client = OpenAI()  # reads OPENAI_API_KEY from the environment\n",
+    "\n",
+    "def query_oceanbase(query, collection_name, vector_name=\"title_vector\", top_k=20):\n",
+    "\n",
+    "    # Create an embedding vector from the user query\n",
+    "    embedded_query = client.embeddings.create(\n",
+    "        input=query,\n",
+    "        model=\"text-embedding-3-small\",\n",
+    "    ).data[0].embedding\n",
+    "\n",
+    "    # Convert the embedding to the vector literal format OCEANBASE accepts\n",
+    "    embedded_query_str = \"[\" + \",\".join(map(str, embedded_query)) + \"]\"\n",
+    "\n",
+    "    # Order rows by L2 distance to the query vector and keep the top_k closest\n",
+    "    query_sql = f\"\"\"\n",
+    "    SELECT id, url, title, l2_distance({vector_name}, '{embedded_query_str}') AS distance\n",
+    "    FROM {collection_name}\n",
+    "    ORDER BY l2_distance({vector_name}, '{embedded_query_str}')\n",
+    "    LIMIT {top_k};\n",
+    "    \"\"\"\n",
+    "\n",
+    "    # Execute the query\n",
+    "    cursor.execute(query_sql)\n",
+    "    results = cursor.fetchall()\n",
+    "\n",
+    "    return results"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "1. Museum of Modern Art (Score: 0.5)\n",
+      "2. Western Europe (Score: 0.485)\n",
+      "3. Renaissance art (Score: 0.479)\n",
+      "4. Pop art (Score: 0.472)\n",
+      "5. Northern Europe (Score: 0.461)\n",
+      "6. Hellenistic art (Score: 0.457)\n",
+      "7. Modernist literature (Score: 0.447)\n",
+      "8. Art film (Score: 0.44)\n",
+      "9. Central Europe (Score: 0.439)\n",
+      "10. European (Score: 0.437)\n",
+      "11. Art (Score: 0.437)\n",
+      "12. Byzantine art (Score: 0.436)\n",
+      "13. Postmodernism (Score: 0.434)\n",
+      "14. Eastern Europe (Score: 0.433)\n",
+      "15. Europe (Score: 0.433)\n",
+      "16. Cubism (Score: 0.432)\n",
+      "17. Impressionism (Score: 0.432)\n",
+      "18. Bauhaus (Score: 0.431)\n",
+      "19. Surrealism (Score: 0.429)\n",
+      "20. Expressionism (Score: 0.429)\n"
+     ]
+    }
+   ],
+   "source": [
+    "query_results = query_oceanbase(\"modern art in Europe\", \"articles\")\n",
+    "for i, result in enumerate(query_results):\n",
+    "    print(f\"{i + 1}. {result[2]} (Score: {round(1 - result[3], 3)})\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "1. Battle of Bannockburn (Score: 0.489)\n",
+      "2. Wars of Scottish Independence (Score: 0.474)\n",
+      "3. 1651 (Score: 0.457)\n",
+      "4. First War of Scottish Independence (Score: 0.452)\n",
+      "5. Robert I of Scotland (Score: 0.445)\n",
+      "6. 841 (Score: 0.441)\n",
+      "7. 1716 (Score: 0.441)\n",
+      "8. 1314 (Score: 0.429)\n",
+      "9. 1263 (Score: 0.428)\n",
+      "10. William Wallace (Score: 0.426)\n",
+      "11. Stirling (Score: 0.419)\n",
+      "12. 1306 (Score: 0.419)\n",
+      "13. 1746 (Score: 0.418)\n",
+      "14. 1040s (Score: 0.414)\n",
+      "15. 1106 (Score: 0.412)\n",
+      "16. 1304 (Score: 0.411)\n",
+      "17. David II of Scotland (Score: 0.408)\n",
+      "18. Braveheart (Score: 0.407)\n",
+      "19. 1124 (Score: 0.406)\n",
+      "20. July 27 (Score: 0.405)\n"
+     ]
+    }
+   ],
+   "source": [
+    "# This time we'll query using the content vector\n",
+    "query_results = query_oceanbase(\"Famous battles in Scottish history\", \"articles\", \"content_vector\")\n",
+    "for i, result in enumerate(query_results):\n",
+    "    print(f\"{i + 1}. {result[2]} (Score: {round(1 - result[3], 3)})\")"
+   ]
+  },
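+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Finally, when you are done querying, release the database resources as noted in the connection cell above:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Close the cursor and the connection now that we are done\n",
+    "cursor.close()\n",
+    "connection.close()"
+   ]
+  }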
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.12.4"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}

From 028094206460f6be6e4a60c9ef554302ee8275ba Mon Sep 17 00:00:00 2001
From: zyh
Date: Thu, 24 Oct 2024 11:40:13 +0800
Subject: [PATCH 2/2] feature: add OceanBase vector database

---
 .../Getting_started_with_OceanBase_and_OpenAI.ipynb | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/examples/vector_databases/oceanbase/Getting_started_with_OceanBase_and_OpenAI.ipynb b/examples/vector_databases/oceanbase/Getting_started_with_OceanBase_and_OpenAI.ipynb
index 80660cf2ff..b5b0853587 100644
--- a/examples/vector_databases/oceanbase/Getting_started_with_OceanBase_and_OpenAI.ipynb
+++ b/examples/vector_databases/oceanbase/Getting_started_with_OceanBase_and_OpenAI.ipynb
@@ -7,13 +7,13 @@
    "source": [
     "# Using OCEANBASE V4.3.3 as a vector database for OpenAI embeddings\n",
     "\n",
-    "This notebook guides you step by step through using OCEANBASE-PG as a vector database for OpenAI embeddings.\n",
+    "This notebook guides you step by step through using OCEANBASE as a vector database for OpenAI embeddings.\n",
     "\n",
     "This notebook presents an end-to-end process of:\n",
     "1. Using precomputed embeddings created by the OpenAI API.\n",
-    "2. Storing the embeddings in a cloud instance of OCEANBASE-PG.\n",
+    "2. Storing the embeddings in a cloud instance of OCEANBASE.\n",
     "3. Converting a raw text query to an embedding with the OpenAI API.\n",
-    "4. Using OCEANBASE-PG to perform the nearest neighbour search in the created collection.\n",
+    "4. Using OCEANBASE to perform the nearest neighbour search in the created collection.\n",
     "\n",
     "### What is OCEANBASE V4.3.3\n",
     "\n",