
OpenAI's Developer Conference: A New Era of AI Innovation

OpenAI DevDay

GPT-4 Turbo with 128K context: Breaking Boundaries in Language Modeling

OpenAI announced GPT-4 Turbo at its November Developer Conference, a new language model that builds on the success of GPT-4. The model is designed to push the boundaries of language modeling, offering a longer context window, more control, more up-to-date knowledge, new modalities, customization, and higher rate limits. GPT-4 Turbo raises the context window from 8,000 tokens to 128,000 tokens, a substantial improvement in the model's ability to maintain context over longer conversations or documents. Compared to the standard GPT-4, this is a huge leap forward in the amount of information the model can process at once.

The new model also offers more control over model inputs and outputs, and better knowledge: the cut-off date for knowledge about the world moves to April 2023, and developers can easily add their own knowledge base. New modalities, such as DALL·E 3, vision, and TTS (text-to-speech), will all be included in the API, with a new version of Whisper speech recognition coming. Customization, including fine-tuning and custom models (which, Altman warned, won’t be cheap), and higher rate limits round out the release, making it a comprehensive upgrade over its predecessors.
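Getting started with the new model does not require a new API surface: GPT-4 Turbo is reachable through the existing Chat Completions endpoint. The snippet below is a minimal sketch, assuming the gpt-4-1106-preview identifier used for the GPT-4 Turbo preview and an OPENAI_API_KEY environment variable; check OpenAI's model list for the current name.

import os
import requests

# Minimal sketch: call GPT-4 Turbo via the Chat Completions endpoint.
api_key = os.getenv("OPENAI_API_KEY")
headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}

payload = {
    "model": "gpt-4-1106-preview",  # GPT-4 Turbo preview identifier (assumed; verify against the model list)
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the DevDay announcements in one sentence."},
    ],
    "max_tokens": 200,
}

response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])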



Multimodal Capabilities: Expanding AI's Horizon

GPT-4 Turbo with vision

GPT-4 now integrates vision, allowing it to understand and analyze images and extending its capabilities beyond text. Developers can use this feature through the gpt-4-vision-preview model. It supports a range of applications, including caption generation and detailed image analysis, which benefits services like Be My Eyes that assist visually impaired individuals. The vision feature will soon be included in GPT-4's stable release. Costs vary by image size; for example, analyzing a 1080×1080 image costs approximately $0.00765. OpenAI provides a comprehensive vision guide for more details; DALL·E 3 remains the tool for image generation.

GPT-4 Turbo with vision analyzing Old School RuneScape through the RuneLite interface
import base64
import logging
import os
import time
from PIL import ImageGrab, Image
import pyautogui as gui
import pygetwindow as gw
import requests

# Set up logging to capture events when script runs and any possible errors.

log_filename = 'rune_capture.log' # Replace with your desired log file name
logging.basicConfig(
    filename=log_filename,
    filemode='a',
    level=logging.INFO,
    format='%(asctime)s - %(name)s - [%(levelname)s] [%(pathname)s:%(lineno)d] - %(message)s - [%(process)d:%(thread)d]'
)
logger = logging.getLogger(__name__)

# Set the client window title.

client_window_title = "RuneLite"

def capture_screenshot():
    try:
        # Get the client window and bring it to the foreground.
        win = gw.getWindowsWithTitle(client_window_title)[0]
        win.activate()
        time.sleep(1)

        # Get the client window's position.
        clientWindow = gw.getWindowsWithTitle(client_window_title)[0]
        x1, y1 = clientWindow.topleft
        x2, y2 = clientWindow.bottomright

        # Define the screenshot path and crop area.
        path = "gameWindow.png"
        gui.screenshot(path)
        img = Image.open(path)
        img = img.crop((x1 + 1, y1 + 40, x2 - 250, y2 - 165))
        img.save(path)
        return path

    except Exception as e:
        logger.error(f"An error occurred while capturing screenshot: {e}")
        raise

def encode_image(image_path):
    try:
        with open(image_path, "rb") as image_file:
            return base64.b64encode(image_file.read()).decode("utf-8")
    except Exception as e:
        logger.error(f"An error occurred while encoding image: {e}")
        raise

def send_image_to_api(base64_image):
    api_key = os.getenv("OPENAI_API_KEY")
    headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}

    payload = {
        "model": "gpt-4-vision-preview",
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": "What’s in this image?"}, {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{base64_image}"}}]},
        ],
        "max_tokens": 300,
    }

    try:
        response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
        response.raise_for_status()  # Will raise an exception for HTTP errors.
        return response.json()

    except Exception as e:
        logger.error(f"An error occurred while sending image to API: {e}")
        raise

if __name__ == "__main__":
    try:
        # Perform the main operations: capture, encode, and send the screenshot.
        screenshot_path = capture_screenshot()
        base64_image = encode_image(screenshot_path)
        api_response = send_image_to_api(base64_image)
        print(api_response)
    except Exception as e:
        logger.error(f"An error occurred in the main function: {e}")
DALL·E 3

Developers can now access DALL·E 3, a multimodal model that generates images from text directly through the API by specifying dall-e-3 as the model.
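As a rough illustration, a request to the image generation endpoint might look like the sketch below; the prompt and size are placeholders, and the response is assumed to return a URL for the generated image (the default response format).

import os
import requests

# Minimal sketch: generate an image with DALL·E 3 via the images endpoint.
api_key = os.getenv("OPENAI_API_KEY")
headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}

payload = {
    "model": "dall-e-3",
    "prompt": "A watercolor painting of a developer conference keynote",  # illustrative prompt
    "n": 1,
    "size": "1024x1024",
}

response = requests.post("https://api.openai.com/v1/images/generations", headers=headers, json=payload)
response.raise_for_status()
print(response.json()["data"][0]["url"])  # URL of the generated image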

TTS (Text-to-Speech)

OpenAI's newest model is available to generate human-quality speech from text via their text-to-speech API.
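A minimal sketch of calling the endpoint is shown below, assuming the tts-1 model and the alloy voice from OpenAI's audio documentation; the response body is the audio itself, written here to an MP3 file.

import os
import requests

# Minimal sketch: synthesize speech with the text-to-speech endpoint.
api_key = os.getenv("OPENAI_API_KEY")
headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}

payload = {
    "model": "tts-1",                       # assumed TTS model name
    "input": "Welcome to OpenAI DevDay.",
    "voice": "alloy",                       # one of the documented voices
}

response = requests.post("https://api.openai.com/v1/audio/speech", headers=headers, json=payload)
response.raise_for_status()
with open("speech.mp3", "wb") as f:
    f.write(response.content)  # MP3 audio by default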

Revenue-Sharing GPT Store: Empowering Creators

The DevDay also cast a spotlight on the newly announced revenue-sharing GPT Store. This platform represents a strategic move towards a more inclusive creator economy within AI, offering compensation to creators of AI applications based on user engagement and usage. This initiative is a nod to the growing importance of content creators in the AI ecosystem and reflects a broader trend of recognizing and rewarding the contributions of individual developers and innovators.

Microsoft Partnership and Azure's Role

The ongoing collaboration with Microsoft was highlighted, with a focus on how Azure's infrastructure is being optimized to support OpenAI's sophisticated AI models. This partnership is a testament to the shared goal of accelerating AI innovation and enhancing integration across various services and platforms as well as Microsoft's heavy investment in AI.

Safe and Gradual AI Integration

OpenAI emphasized a strategic approach to AI integration, advocating for a balance between innovation and safety. The organization invites developers to engage with the new tools thoughtfully, ensuring a responsible progression of AI within different sectors. This measured approach is a reflection of OpenAI's commitment to the safe and ethical development of AI technologies.

Conclusion

The Developer Conference marked a notable milestone for OpenAI and the broader AI community. The launch of GPT-4 Turbo and the introduction of new multimodal capabilities, combined with the support of Microsoft's Azure and the innovative revenue-sharing model of the GPT Store, herald a new phase of growth and experimentation in AI applications.