
Thryve Chat
Personalized holistic wellness guidance

FriendnPal
AI-powered mental health app in Africa.

Strangify
Elevate emotional well-being with Strangify - a safe and supportive platform for venting, connecting, and healing.

Rekka: Your AI Accountability Partner
AI reminder app for neurodiverse users

Space of mind
Affordable online therapy group helping individuals overcome PTSD symptoms and heal.

Gobi
Personal well-being assistant with science-backed practices

TheraMe
AI-Powered Mental Wellness Companion

HyggeX
Personalized learning platform using AI to enhance student experiences.

PsyScribe - AI Therapist
PsyScribe is an AI therapist that improves mental health in cost-effective, anonymous, and secure ways.

MYND
AI mental health app offering personalized meditation and support.

Avocado AI Therapist
Your AI therapist for mental well-being!

Shoorah Wellbeing
Comprehensive wellbeing app for mental health support

SoulCanvas
AI emotional wellness companion transforming narratives into art.

Deepwander
Self-awareness enhancement

iprevail.com
Improving mental health through wellness programs

Comigo
Personalized ADHD support and productivity enhancement.

Free AI Therapy
Free, 24/7 AI Therapist service

Romantic AI
Virtual companionship and conversations with your AI-powered girlfriend.

Replika
Replika is an AI chatbot that provides emotional support and mimics users' texting styles.

MMedC
MMedC is a multilingual medical corpus covering six major languages and roughly 25.5 billion tokens, used for autoregressive training and domain adaptation of general-purpose large language models. The researchers also developed MMedBench, a multilingual medical multiple-choice question-answering benchmark with reasoning rationales, to evaluate the performance of multilingual medical models. Building on this, they trained several open-source models on MMedC and proposed MMed-Llama 3, a multilingual medical large language model that performs strongly on MMedBench and English benchmarks, achieving leading results in both reasoning ability and answer accuracy.

OpenMEDLab
OpenMEDLab aims to provide an innovative platform that brings together multimodal medical foundation models. As the platform continues to develop, these technical advances are expected to be implemented and applied on OpenMEDLab, further driving cross-modal and cross-domain medical AI innovation. Through flexible application across different medical tasks, OpenMEDLab not only supports the adaptation and fine-tuning of foundation models, but also offers innovative approaches to addressing long-tail problems in medicine, improving model efficiency, and reducing training costs.

MMedLM
Corpus. To enable multilingual medical adaptation, the authors constructed a new multilingual medical corpus (MMedC) containing roughly 25.5 billion tokens across six major languages, which can be used for autoregressive training of existing general-purpose LLMs. Benchmark. To track the development of multilingual LLMs in the medical domain, they propose a new multilingual medical multiple-choice question-answering benchmark with rationales, called MMedBench. Model evaluation. They evaluated many popular LLMs on the benchmark, as well as models further trained autoregressively on MMedC. Their final model, called MMedLM 2, has only 7 billion parameters yet outperforms all other open-source models and is even comparable to GPT-4 on MMedBench.