It ensures compatibility across multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above, and it keeps dependencies to a minimum to avoid version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);

For local files, similar code can be used to achieve transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();

Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that need immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for obtaining audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to enable developers to build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);

Audio Intelligence Models

In addition, the SDK has built-in support for audio intelligence models, enabling sentiment analysis and other advanced features:

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

For more details, visit the official AssemblyAI blog.

Image source: Shutterstock.
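The examples above use EnsureStatusCompleted(), which throws if a transcription fails. For applications that prefer to handle failures without exceptions, a transcript's status can be inspected directly. The sketch below assumes the SDK's TranscriptStatus enum and the Error property on the transcript, as documented in the SDK's repository; treat the exact member names as an assumption to verify against the version in use.

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

// Instead of EnsureStatusCompleted(), branch on the status manually
if (transcript.Status == TranscriptStatus.Error)
{
    // Error holds the failure reason reported by the API
    Console.WriteLine($"Transcription failed: {transcript.Error}");
}
else
{
    Console.WriteLine(transcript.Text);
}
```

This pattern is useful in batch pipelines where one failed file should be logged and skipped rather than aborting the whole run.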