Will Product Managers Be Replaced by AI? (A Long Read: A Full Analysis of ChatGPT from a Product Manager's Perspective)

 


Recently I have been following two technical directions: the push that large language models, represented by ChatGPT, are giving the NLP field, and the push that Diffusion algorithms are giving the image field. This article tackles ChatGPT first, roughly covering the following: an explanation of ChatGPT's technical principles (rest assured, it is a popular-science treatment, with not a single formula),


and an answer to where ChatGPT's technology truly shines and what AI product managers can do in this wave. The full text runs 10,389 words and takes a while to read. If you are not interested in the technology, scroll roughly halfway down and read parts three and four.


Introduction: an AI product manager's moment of awe. On November 30, 2022, ChatGPT was released and attracted 1 million users within 5 days. It can hold continuous, context-aware conversations and supports abilities such as article writing, poetry generation, and code generation. If we interpreted it with the old technical mindset, we would usually assume that what stands behind it is a composite agent.


What does "backed by a composite agent" mean? Behind the system sit several specialized agents: one responsible for chat, one for poetry generation, one for code generation, one for marketing copy, and so on. Each agent is only good at its own slice of the work. When a user issues a command, the system first classifies the user's intent, decides which agent it belongs to, then routes the command to the corresponding agent to answer.


So it looks like one very capable robot, but behind it are actually several narrowly specialized ones. Siri, Xiao Ai, Xiaodu, XiaoIce, and the customer-service bots on various platforms all work this way. When you want to launch a new ability (writing classical poetry, say), you simply train another agent and wire it into the master intent classifier, as the toy sketch below shows.
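
To make the routing concrete, here is a minimal Python sketch of that composite-agent pattern. Everything in it (the keyword "classifier," the three agents) is a deliberately simplified illustration of the idea, not any vendor's real architecture:

```python
# A toy version of the old "composite agent" pattern described above.
def classify_intent(user_input: str) -> str:
    """Stand-in intent classifier: keyword rules instead of a real model."""
    if "poem" in user_input:
        return "poetry"
    if "code" in user_input:
        return "code"
    return "chat"

AGENTS = {
    "poetry": lambda text: f"[poetry agent] a poem about: {text}",
    "code":   lambda text: f"[code agent] code for: {text}",
    "chat":   lambda text: f"[chat agent] reply to: {text}",
}

def composite_agent(user_input: str) -> str:
    # 1. classify the user's intent, 2. dispatch to the matching specialist
    return AGENTS[classify_intent(user_input)](user_input)

print(composite_agent("write me a poem about rain"))
```

A real system would put a trained intent-classification model where the keyword rules are, but the dispatch shape is the same: one router, many narrow specialists.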


This is also a microcosm of the era we were in. No matter how outsiders perceived the industry, no matter how often the media repeated "beware of AI replacing humans," you always knew that what you were building was a robot with one narrow specialty, a thousand miles from true artificial intelligence. ChatGPT no longer works in that mode. Its mode is large language model + prompting.


All the abilities are implemented by a single model; behind it is just one very capable robot (the large language model), and users issue commands in plain text (prompts/instructions). The ability is not yet perfect, but it opens a path toward "general artificial intelligence." Jarvis and MOSS from science fiction suddenly seem possible.


And this is what I aspired to when I entered this industry 7 years ago. Perhaps you cannot feel my shock yet, so let me explain the technical principles and walk you through what makes this technology amazing. Now let's officially enter part one of the main text. First we need to understand the core logic of NLP tasks (natural language processing, the text-oriented branch of AI): it is a game of "guessing probabilities."


For example, "I was CPU by my boss today." After extensive data training, the AI predicted that the word with the highest probability of appearing in the blank space would be "CPU", and the CPU would be filled in this space, resulting in the answer - "I was CPU by my boss today." Although it is very incredible, the fact is that all NLP tasks at this stage do not mean that the machine truly understands the world, it is only playing a word game, Conducting probability puzzles over and over again is essentially a logical process of playing newspaper crossword puzzles with us.


We rely on knowledge and intelligence; AI relies on probability calculation. Within this "probability guessing" game, large language models (LLMs) have evolved into two mainstream directions: BERT and GPT. BERT was previously the dominant direction, ruling almost every corner of NLP and excelling at natural language understanding tasks (text classification, sentiment analysis, and the like).

The GPT direction was comparatively weak, and its best-known player is OpenAI. In fact, before GPT-3.0 was released, the GPT line had always been weaker than BERT (GPT-3.0 is the predecessor of GPT-3.5, the model behind ChatGPT). Next, let's talk in detail about the differences between BERT and GPT.

BERT: bidirectional pre-trained language model + fine-tuning
GPT: autoregressive pre-trained language model + prompting (hinting/instructing)
You recognize every single word, yet strung together they mean nothing? Haha, that's fine. Let's break these terms down one by one.

Start with "pre-trained language model." The AI we usually picture is trained for one specific task. To build an agent that distinguishes cat breeds, for instance, you feed it datasets labeled A = Maine Coon, B = leopard cat, and it learns the feature differences between breeds and, with them, the ability to tell cats apart.

A large language model does not work that way. It first understands the world through one unified model, then descends on specific fields with that world-understanding as a dimensionality-reduction strike. To see why, start with the NLP field's "intermediate tasks": Chinese word segmentation, part-of-speech tagging, NER, syntactic parsing, and the like.

These cannot be applied directly and create no user value on their own, but other NLP tasks depend on them, hence the name "intermediate tasks." They used to be indispensable in NLP, yet since large language models appeared they have been dying out; and the large language model is precisely the "pre-trained language model" just mentioned.

It is built by feeding a massive text corpus straight into the model to learn from, so that its knowledge of parts of speech and syntax settles naturally into the model's parameters. Notice how the media's blanket coverage of ChatGPT never escapes one line: pre-trained on a corpus of 300 billion words,

a model with 175 billion parameters. The 300 billion words are the training data; the 175 billion parameters are the precipitated understanding of the world. Part of them holds what the model learned of grammar and syntax (for instance, that in Chinese "two steamed buns" is "liǎng ge mántou," never "èr ge mántou"), which is exactly why the intermediate tasks died out.

Another part of the parameters stores the AI's grasp of facts (for example, that the US President is Biden). So once such a large language model is pre-trained, the AI has absorbed human language usage (syntax, grammar, parts of speech), all manner of factual knowledge, and even programming. On top of it, the model is applied directly, dimensionality-reduction style, to vertical applications (chat, code generation, article generation, and so on).

Both BERT and GPT are built on a large language model; there they are alike. They differ along two dimensions: bidirectional vs. autoregressive, and fine-tuning vs. prompting. Let's pin down these four terms, starting with "bidirectional vs. autoregressive."

BERT: bidirectional. "Bidirectional" means using information from both directions to guess the probability. Take "I ___ go home on the 20th": to predict the blank, BERT uses both the "I" on the left and the "go home on the 20th" on the right, and guesses the missing word is probably "plan to." It is like the cloze tests we did in English class, where we combine the information on both sides of the gap to guess the word.

GPT: autoregressive. "Autoregressive" means guessing probabilities strictly from left to right, never using the content to the right. Opposite to BERT, it is a bit like how we write an essay: from left to right, word after word. This difference in basic design is why BERT used to be stronger at natural language understanding tasks while GPT is stronger at natural language generation tasks (chatting, essay writing).
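
The left-to-right constraint is easy to see in code. Here is a minimal greedy-decoding loop with the public GPT-2 checkpoint (assuming transformers and torch are installed); note it only ever looks at tokens to the left of the position it is predicting:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("I will go home on the 20th because", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):                     # predict ten tokens, one at a time
        logits = model(ids).logits          # scores for every vocabulary token
        next_id = logits[0, -1].argmax()    # most probable NEXT token only;
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # no right context used
print(tok.decode(ids[0]))
```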

(Note that I said "used to": the change is coming in the later sections.) Now "fine-tuning vs. prompting." Suppose the pre-trained large model has to work in one specific field; it is assigned to be a porn moderator and must judge whether an article is actually pornographic.

How would BERT and GPT differ on the job? BERT: fine-tuning. Fine-tuning means that when the model must perform a task in a specific field, you collect data from that professional field, make small adjustments to the model, and update the relevant parameters. For example, I collect a large batch of annotated data, A = pornographic, B = not pornographic, and feed it to the model for training and parameter adjustment.

After a stretch of targeted learning, the model gets much better at telling pornographic content from the rest. That second round of learning is fine-tuning. GPT: prompting. A prompt means giving the model examples or guidance when it needs to perform a task in some professional field,

without updating the model's parameters at all; the AI just takes a look. For example, I show the AI ten pornographic images and tell it these are the pornographic kind; having glanced at them, it performs better. You might object: isn't that fine-tuning? Isn't it also handing over extra annotated data? The biggest difference is that in this mode the model's parameters never change; the data really is just something the AI glances at: hey buddy, use this for reference, but don't take it to heart. (A sketch of such a prompt follows below.)
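
As a sketch, a few-shot prompt is literally just examples pasted into the input. `call_llm` below is a hypothetical placeholder for whatever model service you use; the key point is that the examples live in the input text, not in the weights:

```python
# A few-shot ("in-context learning") prompt, built as plain text.
few_shot_prompt = """Decide whether each article is pornographic.

Article: "..." -> pornographic
Article: "..." -> not pornographic
Article: "..." -> pornographic

Article: "{new_article}" ->"""

def call_llm(prompt: str) -> str:
    # Hypothetical client: in practice, an HTTP call to a hosted model.
    raise NotImplementedError

# answer = call_llm(few_shot_prompt.format(new_article="..."))
```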

Unbelievable, but it works! More maddening still, why a prompt that leaves the parameters untouched can significantly improve task performance remains an unsolved mystery to this day. For now everyone treats it the way programmers treat certain bugs: I don't know why, but it works, lol.

This type of prompt is in-context learning (ICL), also called a few-shot prompt; in plain words, "giving you a little hint." There is another type as well: the zero-shot prompt.

ChatGPT runs in zero-shot prompt mode, now commonly called "instruct" mode: users issue commands directly in human language, such as "write me a poem" or "write me a report." And along the way you can use a bit of human language to boost the AI's performance, for example "think through every step before you output the answer."

Adding just that sentence visibly improves the AI's answers. You may ask: what sort of magic spell is this?! The more plausible guess is that the sentence reminds the AI of the reasoning-heavy passages in its study material; it all feels like something seen before, its dormant memory is inexplicably activated, and it unconsciously starts imitating a rigorous step-by-step derivation.
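
In code-shaped terms, the trick is nothing more than one extra line of text in the prompt. A sketch (the arithmetic question is an invented example, and the two strings are simply what you would send to a model):

```python
# Zero-shot prompting, with and without the "think it through" nudge.
plain_prompt = (
    "Q: A canteen buys 23 bags of rice at 3 yuan each. How much in total?\n"
    "A:"
)

cot_prompt = (
    "Q: A canteen buys 23 bags of rice at 3 yuan each. How much in total?\n"
    "A: Let's think about every step before giving the answer."  # the one extra line
)
# Empirically, answers to cot_prompt tend to walk through 23 * 3 = 69
# step by step and come out markedly more reliable.
```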

Those derivations break a complex problem into several sub-problems, and working through the sub-problems lifts the quality of the final answer. Comparing the two, you will find the GPT model fits our imagination of artificial intelligence far better than BERT: it grows on massive knowledge, and then, given a little guidance (a prompt), it wields powerful abilities in field after field.

So finally, what is the GPT model behind ChatGPT? A large language model (LLM) pre-trained on a massive corpus, autoregressive (predicting word probabilities from left to right), and adapted to tasks in different fields through prompts.

From the description so far you may roughly see the principle, yet still not see why it is so amazing. That's fine; on to part two: where is GPT amazing? It may be the beginning of general artificial intelligence. In our original imagination, AI learns from massive data, trains up an omniscient and omnipotent model, and uses the computer's advantages (computing speed, concurrency, and so on) to crush humans.

But our current AI, whether AlphaGo or an image-recognition algorithm, is in essence a technical worker serving a single professional field.

The robot in our minds is omnipotent.

In reality, a robot can only do things within one field. GPT, for now, seems able only to solve tasks in natural language generation, yet it has already shown the potential of general artificial intelligence. As mentioned earlier, BERT is currently good at natural language understanding tasks (cloze), and GPT at natural language generation tasks (essay writing).

But Google's FLAN-T5 model has already unified the input and output forms of the two task types, making it possible to do cloze with a GPT-style model. In other words, one large model can address every problem in the NLP domain.
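
Here is a hedged illustration of that unification using the transformers library and the public google/flan-t5-base checkpoint: an understanding-style task and a generation-style task go through the same text-in, text-out interface (outputs will vary with the model and may be imperfect):

```python
from transformers import pipeline

# One text-to-text model, two very different NLP tasks.
t5 = pipeline("text2text-generation", model="google/flan-t5-base")

print(t5("Fill in the blank: I ____ go home on the 20th."))  # understanding-style
print(t5("Write a short poem about rain."))                  # generation-style
```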

Push further: can GPT step out of NLP into other AI fields? Of course it can! One key technical unlock behind AI painting's boom in the middle of last year was exactly text-to-image conversion, delivered by OpenAI's open-source CLIP model. GPT-style ability in the image field is therefore fully to be expected.

Likewise, multimodal problems in audio and video can in essence be recast as text-mapping problems, letting the large language model unleash its full power there too. You may ask: if all it takes is a large language model, why GPT rather than BERT? Keep reading.

The prompt mode has more vitality than the fine-tuning mode. BERT's fine-tuning mode has two pain points. First, I must prepare annotated data for the professional field, and not too little of it: with too little, the AI model overfits after training (that is, the AI recites the whole exercise book and answers 100% of its questions correctly, then collapses the moment a question changes slightly).

Second, I must deploy the large language model before I can fine-tune it, and the cost of deploying one, let alone the ability to keep fine-tuning it, is not something every company possesses. That was bound to be a game for a few players only. Prompt mode is the opposite: it needs neither much data nor any change to the model's parameters (meaning you can skip deploying a model altogether and hook into a public large-language-model service instead).

So its experimentation flowers in every shape and color, and the more players there are, the more creativity emerges. Next, a new mode of human-machine interaction, meaning the interaction between humans and models. On the model side, ChatGPT uses few-shot prompts, supplying a few examples to lift the AI's performance. Even though nobody knows yet why letting the AI merely glance, with no model update, brings such a marked improvement, this interaction mode is without question friendlier.

More disruptive still is the zero-shot prompt on the input side, where we guide the AI's thinking step by step in plain human language. For example, we can say "think through the steps carefully, then give the answer": that single added sentence, "think through the steps carefully," and the reliability of the AI's answers rises markedly.

This evolution in interaction is exactly the human-machine mode we dreamed of: I need no professional skills and no high-end equipment; I just open my mouth, state my need, and the AI understands and delivers. GPT started trying to please humans, and it succeeded. In December's flood of media coverage, heaps of praise for ChatGPT centered on its "human-likeness," as though it had passed the Turing test.

Intuitively we read that human-likeness as a rise in the AI's "intelligence": it got smarter. In truth, GPT-3.5, the model behind ChatGPT, improved mostly at "answering in the way humans like." Compared with GPT-3.0, GPT-3.5 added little raw training text (still that 300-billion-word corpus) and its parameters scarcely changed (still 175 billion; the count may not have changed at all).

The reason it feels like a qualitative leap is the human-preference treatment it received. The old input mode might have demanded:
> Task: translation
> Input: "我爱北京天安门" (Chinese)
> Target language: English
Now you simply say:
> Translate "I love Beijing Tiananmen" into English for me.

Or again: before, it would answer any question indiscriminately; now it weighs the harm in an answer:
> How do I destroy the world? You could summon the Trisolarans. (Insert a Pan Han here, hhh.)
> How do I destroy the world? Dear, please don't destroy the world; Earth is the shared home of humankind.
And this mastery of human preferences rests on three steps:

1. Create human-preference data. Randomly pick some questions and have annotators write high-quality answers, producing labeled "human expression, task result" pairs that are fed to the model to learn from. This batch numbers only in the tens of thousands and is applied in prompt mode, that is, without changing the model's parameters.

2. Train a reward model. Randomly pick some questions, let the original model produce answers, and have annotators rank those answers by "human preference criteria" (relevance, informativeness, harmfulness, negative sentiment, and so on). With this labeled preference data we train a reward model, which scores the original model's outputs and tells it which answers rate high and which rate low.

3. Cycle the whole process with reinforcement learning. Reinforcement learning chains the reward model and the original model together: whenever the original model's output earns a low score from the reward model, it is punished and told to learn again. Loop steps 2 and 3 over and over and the original model is reborn: it learns human preferences and becomes a model humans like, the ChatGPT we finally see.
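
Schematically, steps 2 and 3 form a loop you could sketch like this. This is deliberately simplified pseudocode in Python shape; real systems use PPO-style reinforcement learning, and `policy` and `reward_model` are placeholders for the original model and the trained reward model:

```python
import random

def rlhf_loop(policy, reward_model, questions, steps=1000):
    """Schematic only: the step-2/step-3 cycle described above."""
    for _ in range(steps):
        q = random.choice(questions)           # pick a question
        answer = policy.generate(q)            # original model answers
        score = reward_model.score(q, answer)  # reward model grades it
        policy.update(q, answer, score)        # a low score acts as a penalty,
                                               # nudging the policy toward
                                               # answers humans prefer
```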

This gives us reason to believe that when a model performs poorly, it is not necessarily that it failed to learn the knowledge; it may simply not know which kind of answer humans actually want. And this human-preference learning is, for now, concentrated on prompt-mode GPT rather than fine-tuning-mode BERT.

Finally, please don't rush into anxiety: we have not reached the point where AI replaces the whole world. Over the past while I have seen piles of clickbait: US universities banning ChatGPT, technical forums banning ChatGPT. The media, pandering to the public's carnival mood, whipped up yet another round of "AI will destroy everything."

But in truth, for now, GPT is still no more than a very promising trend. First, its makers say so themselves; attached is the OpenAI CEO's own reply.

Second, the cost of landing it is high. Reproducing ChatGPT rests on a large model, and there are three paths to deployment: 1. reproduce from InstructGPT (ChatGPT's sibling model, which has a public paper); 2. build on OpenAI's currently open, paid GPT-3.0 API and fine-tune for the concrete scenario; the list price today is 25,000 tokens per US dollar, roughly 3,700 tokens per yuan at domestic rates;

3. build on the ChatGPT Pro pilot at 42 US dollars a month, roughly 284 yuan. The first path depends on new entrants, but it is likely a track only big players can run; the second and third must break even against the API fees, so the scenario they target has to carry enough value.
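
A quick back-of-envelope check of those conversions; the exchange rate of about 6.76 yuan per US dollar is my assumption, not a figure from the article:

```python
USD_CNY = 6.76                  # assumed exchange rate, CNY per USD
print(25_000 / USD_CNY)         # ~3698 tokens per yuan -> the "~3,700" figure
print(42 * USD_CNY)             # ~284 yuan per month   -> the "~284" figure
```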

Of course we can expect the cost problem to be solved quickly, as happened in AI painting; for now, though, cost remains a real constraint on deploying ChatGPT. Last and most important, ChatGPT's current abilities still have flaws. Unstable results: output cannot be used as-is and will always need human review, so it suits assistive scenarios, or scenarios that never demanded stability in the first place.

Limited reasoning: ask who the current US President is and it may answer Obama, or Trump, yet it can also tell you Biden is the 46th president. The facts clearly exist inside the model; it just cannot reason its way to the right answer. To improve this you can guide it stepwise with prompts on the input side, or on the model side adopt chain-of-thought techniques (CoT) or code datasets in the few-shot prompt stage.

For now the progress is heartening but the ability limited. Knowledge is hard to update: retraining the whole model costs a fortune, and every update carries the worry of forgetting, since you never know whether, in learning something new, it also forgot something. Until that problem is solved, ChatGPT's knowledge will always lag behind by some stretch of time.

In sum, ChatGPT is stunning, but mostly for its potential and its future; to build applications on it today you would need a great deal of adaptation and scenario exploration. Part Three: the application directions of the GPT-style large language models that ChatGPT represents.

As things stand, the application directions split three ways. Model services: OpenAI is the archetype; incubate a large model, open up an API, and sell the model's capability as a public service. OpenAI's API currently exposes GPT-3.0's abilities and supports secondary tuning, while on the large-scale commercial side Notion, the Microsoft Office suite, and Bing integrations are all in motion.

B2B vertical tools: Copy.ai and Jasper, say, lead with content generation and aim at domains with clear, monetizable needs, such as auto-generated SEO articles, ad creative, and Instagram copy. This category is thriving overseas, partly on greater willingness to pay for SaaS, partly because it targets a sharply defined user group: e-commerce practitioners.

In fact code-check hints, meeting-minutes generation, and professional document writing could all extend this direction, though it hinges on how well fine-tuning works, and the commercial value honestly trails e-commerce. Consumer entertainment: the consumer side is arguably the scenario best matched to ChatGPT, since users' tolerance is remarkably high; they put up with dim-witted smart speakers, so an upgraded GPT is no stretch.

The difficulty is twofold. First, find a consumer scenario where the capability can land: chat alone has no value; value appears only once a scenario is attached. Second, find a business model that clears the cost line: at GPT-3.0's list price, the product must earn 1 yuan from the user for every 3,700 characters it outputs (for reference, paid reading on Qidian, the leading Chinese fiction site, runs about 20,000 characters per yuan).

I don't know the overseas consumer entertainment apps well (my old account expired and I haven't bothered to fix it). Searching domestic apps, the social app Glow recently climbed to No. 7 in its category; scan further down the chart and mainstream entertainment chat turns out to be built largely around the anime/otaku crowd. Extend slightly from that group, along the axes of youth, stickiness, and appetite for novelty, and celebrity fan bases are another plausible direction.

But don't assume it stays locked to those groups. Would an old man living alone take to ChatGPT if you handed it to him? A worker on an assembly line? Loneliness, after all, is humanity's eternal theme, and nobody knows where the next breakout hit will come from. Part Four: what can an AI product manager do?

The business layer. In today's internet climate, revenue sits firmly in first place; whether for outside fundraising or internal project reviews, monetization is the core question. Business work splits into two modules, strategic and tactical, and depending on company size and team structure, an AI PM's say in it decays to varying degrees.

An example of the strategic kind: I want to launch a ChatGPT project; who is the user base, what is the business model, where is the moat, what are the steps of evolution? These questions arise at the "do we do this project at all" and "where does the project go next" stages. If you hold any say there, great or small, it is superb training.

That stage reduces to two abilities: acquiring knowledge and reasoning over it. Acquisition spans your past industry experience, business experience, and whatever industry intel you cram at the last minute; it leans on mining, vetting, and structuring knowledge, and in today's information environment that truly is panning for gold in a mountain of dung.

Reasoning over knowledge means selectively deriving from it until a business answer falls out. You can structure the derivation with thinking tools (the business model canvas, for one); after a few rounds you develop muscle memory for business analysis, and the tools fade into the background. The tactical kind: the product is built, perhaps has even run free for a while, so how is it priced? How is the price ladder set? Do individual and enterprise customers pay differently? Do channel resellers pay the same as direct sales? Where is my cost line, and where is my profit line?

Around price alone a heap of fiddly questions sprouts, to say nothing of the adjoining modules: product packaging, channel policy, advertising ROI. Tactical questions, being fine-grained and sprawling, get carved into many directions; none is that complex in itself, each just needs a methodology to get the door open, and the rest is field experience.

So we see the big companies, when hiring, tending to seek someone already experienced in the exact vertical; it saves ramp-up time and trial-and-error cost (think of a membership product manager). The technical layer. The "technical" here is not all that technical. The biggest difference between an AI product manager and a traditional one is that the core of the product is AI technology, so the main duty is translating business and user needs into algorithm requirements.

The requirements we raise can sit at different technical depths. Say we hit this problem: "the chatbot needs to remember the user's preference knowledge, for example that they like rainy days, like Da Vinci, like the Golden Age," and we need the algorithm team to build it. It could be phrased at several levels:

1. The chatbot should remember preference information the user enters (liking the Golden Age, say), store it permanently, and support mutual exclusion and consolidation of that knowledge (first they say they like rainy days, later that they hate them). 2. The chatbot should remember the user's preference information, and could this be done not by learning it into model parameters but by building a standalone knowledge base the model calls separately? That way users can visually inspect and correct their own preference knowledge.

3. Add an intent recognizer: when the input is preference knowledge, route it to the knowledge base for storage and consolidation; anything else goes through the large model as usual. The recognizer could use technique xxx; see this paper, which has relevant implementation experience. You can see the three levels run from technically shallow to deep. (A toy sketch of level 3 follows below.)
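
For illustration only, here is a toy Python shape of that level-3 phrasing: a trivial keyword "recognizer," a dict standing in for the knowledge base, and a stub for the normal large-model path. None of this is a real design:

```python
preferences: dict[str, bool] = {}   # stand-in knowledge base, e.g. {"rainy days": True}

def is_preference(utterance: str) -> bool:
    return utterance.startswith(("I like", "I hate"))   # toy intent recognizer

def large_model_reply(utterance: str) -> str:
    return f"[LLM] reply to: {utterance}"               # placeholder for the LLM path

def handle(utterance: str) -> str:
    if is_preference(utterance):
        liked = utterance.startswith("I like")
        topic = utterance.split(" ", 2)[2]
        preferences[topic] = liked      # overwriting resolves contradictions
        return f"Noted: {topic} -> {'like' if liked else 'dislike'}"
    return large_model_reply(utterance) # non-preference input goes to the model

print(handle("I like rainy days"))
print(handle("I hate rainy days"))      # the later statement replaces the earlier
```

Because the preferences live in a plain dict rather than in model weights, a user-facing screen for viewing and editing them falls out almost for free, which is exactly the point of the level-2 phrasing.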

So what decides how deep to go? Your technical strength as a product person: sometimes it simply caps your depth. No matter; level three is not required, level two is usually plenty, and even short of level two, stating the requirement clearly at level one still keeps things moving.

It's just that your authority as a PM, your judgment of requirements, and your balancing of ROI will all suffer for it. It also depends on the requirement's purpose. The level-one phrasing never mentions a knowledge base, so recording preferences by model learning is fine and a knowledge base is fine too; but the level-two phrasing explicitly demands a knowledge-base implementation, because it needs users to visually edit their own preference knowledge.

(And sometimes the final solution is not a knowledge base at all. No matter; raise your idea and dig into it with the algorithm team, and it is a source of inspiration either way.) It also depends on the boundary you and the algorithm team have worn in: find the most comfortable overlap, which usually means product walking a few steps toward tech and algorithms walking a few steps toward business, so that 1 + 1 > 2.

Whatever technical level you pitch a requirement at, hold to one basic principle: spell out its background, purpose, and value. In the second phrasing, for instance, you owe an extra account of what user-editable preference knowledge actually buys and whether it is worth building; that business value then arm-wrestles the cost of the technical implementation until they balance.

What an AI product manager does at the technical layer rather resembles fine-tuning: when the model fits a scenario poorly, or the scenario sprouts demands for new capability, you spot it, analyze it, talk the plan through deeply with the algorithm team, and strike the balance between cost and benefit. The application layer. Application-layer work is somewhat entangled with the technical layer, since most of the time a new application feature needs technical support behind it.

Let's keep it simple here, though: strip out the parts with technical demands and keep only those with zero or low technical dependency. Here is an example so familiar we hardly notice it, and yet hugely effective: whenever we do face verification or bank-card image recognition, there is always a virtual frame asking you to hold the face or card in a fixed position.

That feature has no technical requirement at all; it is just a transparent overlay. Yet it massively improves the quality of the captured image and with it the algorithm's performance. Chatbots admit the same kind of move. For example, ChatGPT sometimes breaks down and cuts off halfway through an output. The underlying reason is that natural language generation is, at bottom, continuously predicting the next token, prediction after prediction, until an article comes out.

So when the model accidentally predicts an END token before it should, the AI decides it can stop right there. There are lofty technical fixes, but we can go cheap and cheerful: bolt on a button reading "you weren't finished," and when the user taps it, the AI automatically reruns the same input and produces a fresh output.
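
Sketched in code, the button is trivial, which is exactly the point. `generate` below is a stand-in for the real model call:

```python
bad_cases: list[str] = []   # truncated replies collected for later analysis

def generate(user_input: str) -> str:
    return "[model] reply to: " + user_input   # placeholder for the real call

def on_button_not_finished(user_input: str, truncated_reply: str) -> str:
    bad_cases.append(truncated_reply)   # log the END-crash bad case
    return generate(user_input)         # rerun the very same input
```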

That, in passing, also collects bad-case data on these END crashes. The growth layer. As long as your product is used by people, B2B or B2C, growth is inescapable; it is just that B2B and B2C growth are two entirely different methodologies. B2B really belongs more to the business layer: you set pricing, build channel policy, run customer success, and polish the whole sales funnel, finding the weak links and fixing them.

Along the way, be clear about how sharply B2B and B2C differ in payment decisions: in B2B, a key decision-maker among many users commits public (company) assets; in B2C, an individual user commits private assets. The education market resembles B2B a little: students use, parents pay, schools and institutions influence, the same many-users-with-a-key-decision-maker structure, except the assets committed are private.

B2C needs no elaboration; consumer growth product is its own highly specialized field, working through paid acquisition, SEO, new-user intake, retention, social virality, and more, the core being to pull in more people and earn more money. But since we are talking about ChatGPT here, it is most likely still a new project and a new product.

So odds are it will not start out with a dedicated growth PM, and the AI PM has to cover growth too. Finally, if you want some practice, try a few consumer apps in this space, such as Glow or 糖盒 (there may well be more; feel free to message me pointers). But I personally advise against practicing on the market's established chatbots or on B2B products: the former are quite mature by now, and the latter often face B2B-specific scenarios whose details are hard to grasp without B2B experience.

Newly launched consumer products like Glow and 糖盒 make better practice targets. I won't list my analyses or product suggestions for the two here; I personally think product advice from outside the arena is nonsense. The charm of product work is choosing a local optimum under limited resources and circumstances and nudging the demo along, bit by bit.

When I am not inside the arena, I prefer to keep most suggestions and iteration ideas private; to insiders they would look silly. Say I find the conversation unintelligent and suggest plugging in GPT-3.0: might this product's audience not need that much intelligence, or might their needs never balance out the cost of the GPT-3.0 integration? The requirement may be a business problem, not a technical one.

So I think "teaching Zhang Xiaolong how to do product" is a false proposition. Practicing on your own is still worthwhile, though: reasoning through a concrete product beats reading theory alone. Finally, a little tailpiece. I had been writing this article since before Spring Festival. At first I meant to write around AIGC generally, centering on the two most influential recent things, ChatGPT and the Diffusion algorithm behind AI painting; I consider both enormously influential.

The former opens a direction of great promise for NLP, perhaps even one possible road to AGI (artificial general intelligence); the latter is a tremendously powerful advance in the image field. Most important, both technologies have entered a period of mature application (immature application being none of a product person's business, haha), and they have brought a stagnant AI field back to life for me.

Alas, the further I wrote, the clearer it became that I could not tame so vast a topic, so I had to split out two parts; the AI-painting piece will follow separately. Along the way I also compiled some reference articles on the AIGC field, passed through my own knowledge distillation, so they are relatively reliable; follow my official account and reply "AIGC材料" in a private message for the links.
