Files
label_ai_service/app/models/qa_models.py
wh 4211e587ee feat(US5+6): QA generation — POST /api/v1/qa/gen-text and /gen-image
- Add qa_models.py with TextQAItem, GenTextQARequest, QAPair, ImageQAItem,
  GenImageQARequest, ImageQAPair, TextQAResponse, ImageQAResponse
- Implement gen_text_qa(): batch-formats triples into a single prompt, calls
  llm.chat(), parses JSON array via extract_json()
- Implement gen_image_qa(): downloads cropped image from source-data bucket,
  base64-encodes inline (data URI), builds multimodal message, calls
  llm.chat_vision(), parses JSON; image_path preserved on ImageQAPair
- Replace qa.py stub with full router: POST /qa/gen-text and /qa/gen-image
  using Depends(get_llm_client) and Depends(get_storage_client)
- 15 new tests (8 service + 7 router), 53/53 total passing
2026-04-10 16:05:49 +08:00
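
The gen_text_qa() flow described above can be sketched roughly as follows. This is a minimal, self-contained sketch, not the actual service code: `build_prompt`, the prompt wording, and `fake_chat` are illustrative stand-ins (the real client is injected via `Depends(get_llm_client)`), and this `extract_json` is a guess at what the real helper does.

```python
import json
import re


def extract_json(text: str):
    """Hypothetical helper: pull the first JSON array out of an LLM reply
    that may wrap it in prose or a markdown fence."""
    match = re.search(r"\[.*\]", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON array found in LLM output")
    return json.loads(match.group(0))


def build_prompt(items: list[dict]) -> str:
    """Batch-format (subject, predicate, object, snippet) triples into one prompt,
    as the commit describes: a single llm.chat() call for the whole batch."""
    lines = [
        f"{i + 1}. {it['subject']} | {it['predicate']} | {it['object']}"
        f" (context: {it['source_snippet']})"
        for i, it in enumerate(items)
    ]
    return (
        'Generate one question/answer pair per triple below. '
        'Reply with a JSON array of {"question", "answer"} objects.\n'
        + "\n".join(lines)
    )


def fake_chat(prompt: str) -> str:
    """Stub standing in for llm.chat(); returns a canned JSON-array reply."""
    return '[{"question": "Who discovered radium?", "answer": "Marie Curie"}]'


items = [
    {
        "subject": "Marie Curie",
        "predicate": "discovered",
        "object": "radium",
        "source_snippet": "Curie discovered radium in 1898.",
    }
]
pairs = extract_json(fake_chat(build_prompt(items)))
```

The batching choice (one prompt for all triples, one chat call) trades per-item isolation for fewer LLM round trips, which is why the response must come back as a parseable JSON array.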

from pydantic import BaseModel


class TextQAItem(BaseModel):
    subject: str
    predicate: str
    object: str
    source_snippet: str


class GenTextQARequest(BaseModel):
    items: list[TextQAItem]
    model: str | None = None
    prompt_template: str | None = None


class QAPair(BaseModel):
    question: str
    answer: str


class ImageQAItem(BaseModel):
    subject: str
    predicate: str
    object: str
    qualifier: str | None = None
    cropped_image_path: str


class GenImageQARequest(BaseModel):
    items: list[ImageQAItem]
    model: str | None = None
    prompt_template: str | None = None


class ImageQAPair(BaseModel):
    question: str
    answer: str
    image_path: str


class TextQAResponse(BaseModel):
    pairs: list[QAPair]


class ImageQAResponse(BaseModel):
    pairs: list[ImageQAPair]
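
For the image path (ImageQAItem.cropped_image_path), the commit says gen_image_qa() downloads the cropped image, base64-encodes it inline as a data URI, and builds a multimodal message for llm.chat_vision(). A rough sketch of that step, with the caveat that the message shape below is an assumption modeled on common OpenAI-style vision APIs and the real chat_vision() contract may differ:

```python
import base64


def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Inline image bytes as a data URI so no separate upload is needed."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"


def build_vision_message(prompt: str, image_bytes: bytes) -> dict:
    # Assumed message shape (text part + image_url part); the actual
    # structure expected by llm.chat_vision() is not shown in the commit.
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": to_data_uri(image_bytes)}},
        ],
    }


msg = build_vision_message("Describe the cropped region.", b"\x89PNG fake bytes")
```

Inlining the image keeps the request self-contained at the cost of payload size, which is reasonable here since the inputs are small cropped regions rather than full images; image_path is then carried through onto ImageQAPair so the caller can trace each pair back to its crop.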