To use Google Cloud Speech with Integromat, you need a Google Cloud Platform account. If you do not have one, you can create an account on the Google Cloud Speech-to-Text website.
To connect Google Cloud Speech to Integromat, you must connect your Google Cloud Platform account by providing a Client ID and Client Secret.
1. Open the Google Cloud Console.
2. Create a new project.
3. Go to APIs & Services > Credentials and configure the OAuth consent screen.
4. Click the Add Scope button to add a scope.
5. Add the Cloud Speech-to-Text API (https://www.googleapis.com/auth/cloud-platform) in the Add scope dialog.
6. Add integromat.com to the Authorized domains field.
7. Click the Save button to save the OAuth consent screen settings.
8. Create the OAuth client ID. Use https://www.integromat.com/oauth/cb/google-cloud-speech as the redirect URI.
9. Now you can copy the Client ID and the Client Secret from the dialog that appears.
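For reference, here is a minimal sketch of how these values fit together: Integromat builds a standard Google OAuth 2.0 authorization URL from the Client ID, the scope, and the redirect URI configured above. The Client ID below is a placeholder, not a real credential:

```python
# Minimal sketch: the OAuth 2.0 authorization URL built from the values above.
# CLIENT_ID is a placeholder; use the Client ID copied in step 9.
from urllib.parse import urlencode

CLIENT_ID = "1234567890-abc.apps.googleusercontent.com"  # placeholder

params = {
    "client_id": CLIENT_ID,
    "redirect_uri": "https://www.integromat.com/oauth/cb/google-cloud-speech",
    "response_type": "code",
    "scope": "https://www.googleapis.com/auth/cloud-platform",
    "access_type": "offline",  # requests a refresh token
}
print("https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params))
```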
1. In Integromat, open the Create a connection dialog.
2. Insert the Client ID and Client Secret.
3. Grant access to your Google Account.
To start using the Google Cloud Speech modules, you must enable the Cloud Speech-to-Text API for your project.
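If you manage your project from the command line, the same API can typically also be enabled with `gcloud services enable speech.googleapis.com` (assuming the gcloud CLI is installed and authenticated for your project).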
Transcribes long audio files (longer than 1 minute) to text using asynchronous speech recognition. The module returns the name of the long-running operation, not the recognized text itself; the Google Cloud Speech > Operation: Get module is then needed to retrieve the recognized text. A sketch of the underlying API call follows the field table below.
|Connection||Establish a connection to your Google Cloud account.|
|Source file||Map the audio file you want to convert to text. If left empty, the File URI must be provided instead.|
|File URI||URI that points to the file that contains audio data. The file must not be compressed (for example, gzip). Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket_name/object_name.|
|Audio Channels Count||Enter the number of channels in the audio file. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16 and FLAC are 1-8. The module only recognizes the first channel by default. To perform independent recognition on each channel, enable the Enable separate recognition per channel option.|
|Enable separate recognition per channel||Enable this option and set the Audio Channels Count to more than 1 to have each channel recognized separately. The recognition result then contains a channelTag field stating which channel the result belongs to. If this option is disabled, the module only recognizes the first channel. Note that the request is billed cumulatively for all recognized channels: Audio Channels Count × audio length.|
|Language Code (BCP-47)||Enter the language of the supplied audio as a BCP-47 language tag, for example, "en-US". See Language Support for a list of the currently supported language codes. You can use a BCP-47 validator. This field is mandatory.|
|Additional language tags||Add more language codes if needed. See Language Support for a list of the currently supported language codes. If alternative languages are listed, the recognition result contains the recognition in the most likely language detected, chosen from the alternatives and the main Language Code, and includes the language tag of the language detected in the audio. This feature is only supported for Voice Command and Voice Search use cases, and performance may vary for other use cases (e.g., phone call transcription).|
|Encoding||Select the encoding of the audio file/data. For best results, the audio source should be captured and transmitted using a lossless encoding (FLAC or LINEAR16).|
|Sample rate in Hertz||Enter the sample rate in Hertz of the audio data. Valid values are 8000-48000; 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling). This field is optional for FLAC and WAV audio files, where the sample rate can be read from the file header.|
|Number of alternatives||The maximum number of recognition hypotheses to be returned. Valid values are 0-30; a value of 0 or 1 returns at most one alternative.|
|Profanity filter||If this option is enabled, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If this option is disabled, profanities won't be filtered out. This field is optional.|
|Array of SpeechContexts||Enter "hints" that favor specific words and phrases in the results. A list of word and phrase "hints" makes the recognizer more likely to recognize them. Use this to improve accuracy for words and phrases the user is likely to say, for example, specific commands, or to add words to the recognizer's vocabulary. See usage limits.|
|Enable word time offsets||If this option is enabled, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If this option is disabled, no word-level time offset information is returned. The option is disabled by default. This field is optional.|
|Enable word confidence||If this option is enabled, the top result includes a list of words and the confidence for those words. If this option is disabled, no word-level confidence information is returned. The option is disabled by default. This field is optional.|
|Enable automatic punctuation||If this option is enabled, it adds punctuation to recognition result hypotheses. This feature is only available in selected languages. Setting this for requests in other languages has no effect at all. The option is disabled by default. This field is optional.|
This is currently offered as an experimental service, complimentary to all users. In the future, this may be exclusively available as a premium feature.
|Enable speaker diarization||This option enables speaker detection for each recognized word in the top alternative of the recognition result using a speakerTag provided in the WordInfo.|
|Diarization speaker count||Enter the estimated number of speakers in the conversation. If not set, defaults to 2. Ignored unless Enable speaker diarization is enabled.|
|Metadata||Description of the audio data to be recognized.|
|Model||Select the model best suited to your domain to get the best results.|
|Use enhanced||Enable this option to use an enhanced model for speech recognition. You must also set the Model field to a model that has an enhanced variant. You must opt in to audio logging using the instructions in the data logging documentation; if you enable this option without having enabled audio logging, you will receive an error.|
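As a point of reference, here is a minimal sketch of the API request these fields map to, using Google's Python client library (google-cloud-speech). The bucket and object names are placeholders, and the field values simply mirror entries from the table above:

```python
# Minimal sketch of the underlying Long Running Recognize call, using the
# google-cloud-speech Python client. Bucket/object names are placeholders.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,  # Encoding
    sample_rate_hertz=16000,                               # Sample rate in Hertz
    language_code="en-US",                                 # Language Code (BCP-47)
    audio_channel_count=2,                                 # Audio Channels Count
    enable_separate_recognition_per_channel=True,
    enable_automatic_punctuation=True,
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/my-audio.flac")  # File URI

operation = client.long_running_recognize(config=config, audio=audio)

# The operation name is what the Operation: Get module consumes.
print(operation.operation.name)
```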
Retrieves the latest state or the result of a long-running operation. You can then use the result (the recognized text) in the subsequent modules of your choice.
|Connection||Establish a connection to your Google Cloud account.|
|Operation name||Enter the name of the operation. The name can be retrieved using the Speech: Long Running Recognize module.|
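For reference, here is a sketch of the underlying REST call this module performs, assuming application-default credentials are available; OPERATION_NAME is a placeholder for the name returned by Long Running Recognize:

```python
# Sketch: poll a long-running operation by name via the Speech REST API.
# Assumes application-default credentials; OPERATION_NAME is a placeholder.
import google.auth
import google.auth.transport.requests
import requests

OPERATION_NAME = "1234567890"  # placeholder

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

resp = requests.get(
    f"https://speech.googleapis.com/v1/operations/{OPERATION_NAME}",
    headers={"Authorization": f"Bearer {credentials.token}"},
)
op = resp.json()

if op.get("done"):
    for result in op["response"]["results"]:
        print(result["alternatives"][0]["transcript"])
else:
    print("Recognition still in progress; check again later.")
```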
This module returns the recognized text for short audio (less than ~1 minute). To process a speech recognition request for long audio, use the Speech: Long Running Recognize module.
The Speech: Recognize module contains the same options as the Speech: Long Running Recognize module. The only difference is that recognition is performed immediately, so you do not need to use the Operation: Get module.
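A minimal sketch of the synchronous call, again using the Python client library; the file URI is a placeholder:

```python
# Minimal sketch of the synchronous Recognize call: the transcript comes back
# in the same response, with no operation to poll. URI is a placeholder.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(language_code="en-US")
audio = speech.RecognitionAudio(uri="gs://my-bucket/short-clip.flac")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```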