scaling questions #137

@sameermahajan

Description

I notice that for every query to OpenAI, you send all the predefined contexts along with the query text to get the results. I believe the OpenAI API is stateless, which is why the entire context has to be sent every time.

In that case, how would this solution scale as the system evolves, or in other use cases (I have some in mind that I am currently exploring) where there might be thousands (if not more) of complex predefined contexts? Will it have to send all of them every time to get the desired results? Are there plans to keep some state in the cloud, perhaps via a session ID, cookie, or some other mechanism, so that the entire context doesn't have to be sent on every request?
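To illustrate the concern, here is a minimal Python sketch of the stateless request pattern described above. The function and variable names (`build_request_messages`, `predefined_contexts`) are hypothetical, not taken from this project; the point is that every request payload must carry all N predefined contexts again, so payload size grows linearly with N regardless of any prior requests.

```python
def build_request_messages(predefined_contexts, query):
    """Assemble the full message list that must accompany EVERY request,
    because the API keeps no server-side conversation state."""
    messages = [{"role": "system", "content": ctx} for ctx in predefined_contexts]
    messages.append({"role": "user", "content": query})
    return messages

# With 1000 predefined contexts, each request resends all 1000 of them:
contexts = [f"context {i}" for i in range(1000)]
payload = build_request_messages(contexts, "my query")
print(len(payload))  # 1001 messages in every single request
```

This list would then be passed as the `messages` parameter of a chat completion call; the question is whether that linear resend cost can be avoided.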
