
server.py Errors when running on docker #2

@nachiketh89

Description


Respected Scholars,

I am Nachiketh. I have worked as a software engineer at Synopsys for 9 years and hold a BE from VTU.

Thank you for the awesome contribution you have made to Indian languages.

I hit an issue when trying to reproduce the setup with Docker.

I have discussed the issue below with the AI4Bharat team, and they say the issue is with the EkStep part of the work.

I am trying to reproduce the steps mentioned in the Open-Speech-EkStep/speech-recognition-open-api GitHub repository.

My deployed model's directory structure is:

(screenshot: deployed_models directory structure)

Scenario 1: gpu=False

I am running the following command:


docker run -itd -p 50051:50051 --env gpu=False --env languages=['en','hi'] -v C:\Users\nachi\Downloads\EKSTEP\speech-recognition-open-api\deployed_models:/opt/speech_recognition_open_api/deployed_models/ gcr.io/ekstepspeechrecognition/speech_recognition_model_api:3.2.37
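For reference, a minimal sketch (assumed; the actual `model_service.py` may parse differently) of how a `languages` value like `['en','hi']` can be read from the environment in Python:

```python
import ast
import os

# Hypothetical illustration -- simulate the env var passed via --env.
os.environ["languages"] = "['en','hi']"

# ast.literal_eval safely turns the bracketed string into a Python list,
# without executing arbitrary code the way eval() would.
langs = ast.literal_eval(os.environ["languages"])
print(langs)  # ['en', 'hi']
```

If the shell mangles or strips the quotes around `'en'` and `'hi'`, this parse step fails, so it is worth checking the value the container actually receives.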


Scenario 2: gpu=True

I submitted the command below from a Windows 11 cmd prompt in administrator mode, and I see the following errors in the Docker log:


docker run -itd -p 50051:50051 --env gpu=True --env languages=['en','hi'] --gpus all -v C:\Users\nachi\Downloads\EKSTEP\speech-recognition-open-api\deployed_models:/opt/speech_recognition_open_api/deployed_models/ gcr.io/ekstepspeechrecognition/speech_recognition_model_api:3.2.37


=========
2023-09-14 00:11:20 [NeMo W 2023-09-13 18:41:20 optimizers:46] Apex was not found. Using the lamb optimizer will error out.
2023-09-14 00:11:23 2023-09-13 18:41:23,658 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(56) - INFO - User has provided gpu as True gpu_present True
2023-09-14 00:11:23 2023-09-13 18:41:23,659 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(61) - INFO - ### GPU Utilization ###
2023-09-14 00:11:23 | ID | GPU | MEM |
2023-09-14 00:11:23 ------------------
2023-09-14 00:11:23 | 0 | 0% | 17% |
2023-09-14 00:11:23 2023-09-13 18:41:23,839 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(67) - INFO - available GPUs ['0'], all GPUs [0], excluded GPUs [0]
2023-09-14 00:11:23 2023-09-13 18:41:23,906 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(75) - INFO - Selected GPUs: [] requested GPUs [0]
2023-09-14 00:11:23 2023-09-13 18:41:23,907 — [MainThread] - src.lib.inference_lib - inference_lib.py.get_cuda_device(82) - INFO - selected gpu index: None selecting device: cuda
2023-09-14 00:11:23 Using server workers: 10
2023-09-14 00:11:23 2023-09-13 18:41:23,966 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(29) - INFO - Initializing realtime and batch inference service
2023-09-14 00:11:23 2023-09-13 18:41:23,966 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(38) - INFO - User has provided gpu as True
2023-09-14 00:11:23 2023-09-13 18:41:23,967 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(41) - INFO - GPU available on machine True
2023-09-14 00:11:23 2023-09-13 18:41:23,968 — [MainThread] - src.speech_recognition_service - speech_recognition_service.py.__init__(44) - INFO - Loading models from /opt/speech_recognition_open_api/deployed_models/ with gpu value: True
2023-09-14 00:11:23 2023-09-13 18:41:23,968 — [MainThread] - src.model_service - model_service.py.__init__(35) - INFO - environment requested languages ['en','hi']
2023-09-14 00:11:23 Traceback (most recent call last):
2023-09-14 00:11:23 File "/opt/speech_recognition_open_api/server.py", line 24, in <module>
2023-09-14 00:11:23 run()
2023-09-14 00:11:23 File "/opt/speech_recognition_open_api/server.py", line 17, in run
2023-09-14 00:11:23 add_SpeechRecognizerServicer_to_server(SpeechRecognizer(), server)
2023-09-14 00:11:23 File "/opt/speech_recognition_open_api/src/speech_recognition_service.py", line 45, in __init__
2023-09-14 00:11:23 self.model_service = ModelService(self.MODEL_BASE_PATH, 'kenlm', gpu, gpu)
2023-09-14 00:11:23 File "/opt/speech_recognition_open_api/src/model_service.py", line 39, in __init__
2023-09-14 00:11:23 model_config = json.load(f)
2023-09-14 00:11:23 File "/usr/lib/python3.8/json/__init__.py", line 293, in load
2023-09-14 00:11:23 return loads(fp.read(),
2023-09-14 00:11:23 File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
2023-09-14 00:11:23 return _default_decoder.decode(s)
2023-09-14 00:11:23 File "/usr/lib/python3.8/json/decoder.py", line 337, in decode
2023-09-14 00:11:23 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2023-09-14 00:11:23 File "/usr/lib/python3.8/json/decoder.py", line 353, in raw_decode
2023-09-14 00:11:23 obj, end = self.scan_once(s, idx)
2023-09-14 00:11:23 json.decoder.JSONDecodeError: Invalid \escape: line 3 column 20 (char 33)

=========
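The `Invalid \escape` error above comes from `json.load` reading a config file under `deployed_models/`. A common way to trigger exactly this error is a single-backslash Windows path inside a JSON value; a minimal sketch reproducing it (the file content below is an assumed illustration, not the actual deployed config):

```python
import json

# Assumed illustration: a JSON value containing a single-backslash Windows
# path. JSON treats "\U" as an escape sequence, and "\U" is not valid, so
# decoding fails with the same "Invalid \escape" error seen in the log.
bad_config = '{"language": "en", "model_path": "C:\\Users\\nachi\\models"}'
try:
    json.loads(bad_config)
except json.JSONDecodeError as err:
    print(err)  # same "Invalid \escape" message as in the log above

# Doubling the backslashes inside the JSON file, or using forward slashes,
# makes the value valid JSON:
good_config = '{"language": "en", "model_path": "C:/Users/nachi/models"}'
print(json.loads(good_config)["model_path"])
```

So it may be worth checking line 3 of each JSON config under `deployed_models/` for unescaped `\` characters, especially if any paths were edited on Windows.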

Any clue to the issue with server.py would help me greatly in getting your system up and running.

My WhatsApp number is +91-8861636108; if you can share your WhatsApp or mobile number, it would greatly help in getting around this issue.
My email ID is nachiketh89@gmail.com

Please help; I have been stuck for 2 days trying to debug this issue.

Thanking you,
Nachiketh
