Title: A Coding Assistant using Perplexity's Large Language Models
Description: A coding assistant using the Perplexity Large Language Model API (<https://www.perplexity.ai/>). A set of functions and 'RStudio' add-ins that aim to help R developers.
Authors: Gabriel Kaiser [aut, cre]
Maintainer: Gabriel Kaiser <[email protected]>
License: GPL (>= 3)
Version: 0.0.3
Built: 2024-11-25 03:53:15 UTC
Source: https://github.com/gabrielkaiserqfin/perplexr
Useful links:
Report bugs at https://github.com/GabrielKaiserQFin/perplexR/issues
Large Language Model: Annotate Code
annotateCode(
  code = clipr::read_clip(allow_non_interactive = TRUE),
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant with extensive programming skills.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
code: The code to be commented by the Large Language Model. If not provided, the clipboard contents are used.
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant with extensive programming skills."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: annotateCode("z <- function(x) scale(x)^2") ## End(Not run)
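For annotation work it often pays to trade randomness for determinism. A minimal sketch, assuming a valid key is already set in the PERPLEXITY_API_KEY environment variable; the code string and parameter values are illustrative, not prescriptive:

# Illustrative call: pick the code-oriented model and lower the temperature
# for more reproducible annotations. Leave top_p at NULL, since temperature
# and top_p should not be set together.
annotateCode(
  code = "fib <- function(n) if (n < 2) n else fib(n - 1) + fib(n - 2)",
  modelSelection = "codellama-70b-instruct",
  temperature = 0.2
)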
Get Large Language Model Completions Endpoint
API_Request(
  prompt,
  PERPLEXITY_API_KEY = PERPLEXITY_API_KEY,
  modelSelection = modelSelection,
  systemRole = systemRole,
  maxTokens = maxTokens,
  temperature = temperature,
  top_p = top_p,
  top_k = top_k,
  presence_penalty = presence_penalty,
  frequency_penalty = frequency_penalty,
  proxy = proxy
)
prompt: The prompt to generate completions for.
PERPLEXITY_API_KEY: Perplexity API key.
modelSelection: Model choice, e.g. "mistral-7b-instruct".
systemRole: System role for the model.
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request.
Note: See also clearChatSession.
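Judging by its signature, API_Request is the low-level completions call that the exported helpers forward their tuning parameters to; it has no defaults of its own. A sketch of a direct call, with every pass-through argument spelled out (the values mirror the helpers' documented defaults):

# Illustrative direct call; API_Request has no defaults, so every
# argument must be supplied explicitly.
resp <- API_Request(
  prompt = "Explain what scale() does in R, in one sentence.",
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = "mistral-7b-instruct",
  systemRole = "You are a helpful assistant.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)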
AskMe(
  question,
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
question: The question to ask the Large Language Model.
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: AskMe("What do you think about Large language models?") ## End(Not run)
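maxTokens caps the length of the completion (default 265), so longer answers need a larger budget. An illustrative call, assuming the API key is set in the environment:

# Illustrative: raise the token budget for a longer, slightly less
# random answer.
AskMe(
  question = "Summarize the main differences between lapply() and vapply().",
  maxTokens = 1000,
  temperature = 0.7
)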
Create {testthat} test cases for the code.
buildUnitTests(
  code = clipr::read_clip(allow_non_interactive = TRUE),
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant with extensive programming skills.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
code: The code for which the Large Language Model should create unit tests. If not provided, the clipboard contents are used.
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant with extensive programming skills."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: buildUnitTests("squared_numbers <- function(numbers) {\n  numbers ^ 2\n}") ## End(Not run)
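Because the return value is a plain character vector, the generated tests can be written straight to a testthat file and reviewed before being committed. A sketch; the target path is illustrative:

# Illustrative workflow: generate unit tests and save them for review.
tests <- buildUnitTests(
  "squared_numbers <- function(numbers) {\n  numbers ^ 2\n}"
)
writeLines(tests, "tests/testthat/test-squared_numbers.R")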
Large Language Model: Clarify Code
clarifyCode(
  code = clipr::read_clip(allow_non_interactive = TRUE),
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant with extensive programming skills.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
code: The code to be explained by the Large Language Model. If not provided, the clipboard contents are used.
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant with extensive programming skills."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: clarifyCode("z <- function(x) scale(x)^2") ## End(Not run)
Large Language Model: Find Issues in Code
debugCode(
  code = clipr::read_clip(allow_non_interactive = TRUE),
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant with extensive programming skills.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
code: The code to be analyzed by the Large Language Model. If not provided, the clipboard contents are used.
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant with extensive programming skills."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: debugCode("z <- function(x) scale(x)2") ## End(Not run)
Large Language Model: Code Documentation (roxygen2 style)
documentCode(
  code = clipr::read_clip(allow_non_interactive = TRUE),
  inLineDocumentation = "roxygen2",
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant with extensive programming skills.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
code: The code to be documented by the Large Language Model. If not provided, the clipboard contents are used.
inLineDocumentation: Formatting style of the in-line documentation. Default is "roxygen2".
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant with extensive programming skills."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: documentCode("z <- function(x) scale(x)^2") ## End(Not run)
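The inLineDocumentation argument selects the documentation style, with "roxygen2" as the default. An illustrative call that prints the generated header to the console:

# Illustrative: document a function and inspect the roxygen2 output.
doc <- documentCode(
  code = "z <- function(x) scale(x)^2",
  inLineDocumentation = "roxygen2"
)
cat(doc, sep = "\n")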
Run a Large Language Model as an RStudio add-in
execAddin(FUN)
FUN: The function to be executed.
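The entry above does not say whether FUN is a function object or a function name; a sketch under the assumption that the dispatcher accepts the name of an exported helper as a string:

# Hypothetical: dispatch the rewriteText helper through the add-in runner.
# That FUN is a character name, not a function object, is an assumption.
execAddin("rewriteText")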
Opens an interactive chat session with the Large Language Model
execAddin_AskMe()
Large Language Model: Finish Code
finishCode(
  code = clipr::read_clip(allow_non_interactive = TRUE),
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant with extensive programming skills.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
code: The code to be completed by the Large Language Model. If not provided, the clipboard contents are used.
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant with extensive programming skills."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: finishCode("# A function to square each element of a vector\nsquare_each <- function(") ## End(Not run)
Large Language Model: Create a Function or Variable Name
namingGenie(
  code = clipr::read_clip(allow_non_interactive = TRUE),
  namingConvention = "camelCase",
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant with extensive programming skills.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
code: The code whose result should be given a variable name. If not provided, the clipboard contents are used.
namingConvention: Naming convention. Default is "camelCase".
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant with extensive programming skills."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: namingGenie("sapply(1:10, function(i) i ** 2)") ## End(Not run)
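Conventions other than the camelCase default can be requested through namingConvention; the string appears to be passed to the model as free text, so common convention names should work. An illustrative call:

# Illustrative: ask for a snake_case name instead of the default camelCase.
namingGenie(
  code = "sapply(1:10, function(i) i ** 2)",
  namingConvention = "snake_case"
)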
Large Language Model: Optimize Code
optimizeCode(
  code = clipr::read_clip(allow_non_interactive = TRUE),
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant with extensive programming skills.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
code: The code to be optimized by the Large Language Model. If not provided, the clipboard contents are used.
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant with extensive programming skills."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: optimizeCode("z <- function(x) scale(x)^2") ## End(Not run)
Takes the raw response from the Perplexity API and extracts the text content from it.
responseParser(raw)
raw: The raw object returned by the Perplexity API.
Returns a character vector containing the text content of the response.
responseReturn
responseReturn(raw)
raw: The chat response to return.
A character value with the response generated by the Large Language Model.
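responseParser and responseReturn both operate on the raw API object. A minimal sketch, assuming raw holds an unparsed response obtained earlier from the completions endpoint:

# Illustrative: extract and print the text content of a raw response.
# `raw` is assumed to be the unmodified object returned by the API.
text <- responseParser(raw)
cat(text, sep = "\n")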
Large Language Model: Rewrite Text
rewriteText(
  text = clipr::read_clip(allow_non_interactive = TRUE),
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
text: The text to be rewritten by the Large Language Model. If not provided, the clipboard contents are used.
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: rewriteText("Dear Recipient, I hope this message finds you well.") ## End(Not run)
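Since systemRole is an ordinary string, the rewriting persona can be steered per call. An illustrative example; the role text is made up:

# Illustrative: steer the rewrite toward a terser business tone.
rewriteText(
  text = "Dear Recipient, I hope this message finds you well.",
  systemRole = "You are a concise business editor."
)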
This function takes a snippet of code and translates it from one programming language to another using the Perplexity API. By default, it reads the code from the clipboard and translates from R to Python.
translateCode(
  code = clipr::read_clip(allow_non_interactive = TRUE),
  from = "R",
  to = "Python",
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant with extensive programming skills.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
code: A string containing the code to be translated. If not provided, the function will attempt to read from the clipboard.
from: The language of the input code. Default is "R".
to: The target language for translation. Default is "Python".
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant with extensive programming skills."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A string containing the translated code.
## Not run: translateCode("your R code here", from = "R", to = "Python") ## End(Not run)
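The from/to pair is free text, so the translation can also run in the opposite direction of the R-to-Python default. An illustrative call:

# Illustrative: translate a Python snippet into R.
translateCode(
  code = "def square(x):\n    return x ** 2",
  from = "Python",
  to = "R"
)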
Large Language Model: Translate Text
translateText(
  text = clipr::read_clip(allow_non_interactive = TRUE),
  toLanguage = "German",
  PERPLEXITY_API_KEY = Sys.getenv("PERPLEXITY_API_KEY"),
  modelSelection = c("mistral-7b-instruct", "mixtral-8x7b-instruct",
    "codellama-70b-instruct", "sonar-small-chat", "sonar-small-online",
    "sonar-medium-chat", "sonar-medium-online"),
  systemRole = "You are a helpful assistant.",
  maxTokens = 265,
  temperature = 1,
  top_p = NULL,
  top_k = 100,
  presence_penalty = 0,
  frequency_penalty = NULL,
  proxy = NULL
)
text: The text to be translated by the Large Language Model. If not provided, the clipboard contents are used.
toLanguage: The target language. Default is "German".
PERPLEXITY_API_KEY: Perplexity API key. Defaults to the PERPLEXITY_API_KEY environment variable.
modelSelection: Model choice. Default is "mistral-7b-instruct".
systemRole: System role for the model. Default is "You are a helpful assistant."
maxTokens: The maximum number of completion tokens returned by the API.
temperature: The amount of randomness in the response, between 0 (inclusive) and 2 (exclusive). Higher values are more random; lower values are more deterministic. Set either temperature or top_p, not both.
top_p: Nucleus sampling threshold, between 0 and 1 (inclusive).
top_k: The number of tokens kept for top-k filtering, an integer between 0 and 2048 (inclusive). A value of 0 disables top-k filtering.
presence_penalty: A value between -2.0 and 2.0. Positive values penalize tokens that already appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty.
frequency_penalty: A multiplicative penalty greater than 0. Values greater than 1.0 penalize tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty.
proxy: Optional proxy for the request. Default is NULL (no proxy).
A character value with the response generated by the Large Language Model.
## Not run: translateText("Dear Recipient, I hope this message finds you well.") ## End(Not run)
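toLanguage defaults to "German" but accepts other language names. An illustrative call:

# Illustrative: translate into French instead of the German default.
translateText(
  text = "Dear Recipient, I hope this message finds you well.",
  toLanguage = "French"
)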