LLM Prompting & Learning Guide


I spent today downloading and processing all of my chats with LLMs, with the aim of compressing my previous prompts and using them to get an LLM prompt review. I came away with some tips that I think might prove helpful for other researchers, so I'm putting up this practical companion for improving efficiency, learning, and collaboration with language models.


Strengths in LLM Interactions

1. Precision in Technical Queries

2. Depth of Exploration

3. Iterative Refinement

4. Multimodal Use


Areas for Improvement

1. Add Context to Abstract Queries

2. Avoid Over-Fragmentation

3. Clarify Ownership of Requests

4. Ask for Error Traps

5. Balance Theory with Practice


Interaction Style Reflection

Suggested Style Enhancers:

  1. “Explain like I’m a beginner.”
  2. “Now optimize for brevity.”
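These enhancers work best as separate follow-up turns rather than one bloated prompt. A minimal sketch of that pattern (the conversation structure here is illustrative; plug in whatever chat API you actually use):

```python
# Chain style-enhancer follow-ups onto a base question.
# `turns` is a plain list of (role, text) tuples; no real API is assumed.

STYLE_ENHANCERS = [
    "Explain like I'm a beginner.",
    "Now optimize for brevity.",
]

def with_enhancers(question, enhancers=STYLE_ENHANCERS):
    """Return the conversation turns to send, base question first."""
    return [("user", question)] + [("user", e) for e in enhancers]

turns = with_enhancers("How does gradient checkpointing trade compute for memory?")
# Each enhancer becomes its own follow-up turn, so the model answers fully
# before being asked to re-explain or compress.
```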

Optimizing LLM Outputs

1. Pre-constrain Format

“Give a 3-sentence summary, then 3 bullet points of caveats.”

2. Force Prioritization

“Rank these by memory usage for n=1e6.”

3. Meta-Awareness

“What key question did I forget to ask about [topic]?”
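The three optimizers above compose naturally into a single prompt template. A small sketch, assuming nothing beyond string assembly (the function name and defaults are my own, not from any library):

```python
# Compose format pre-constraint, forced prioritization, and a meta-question
# into one prompt. Purely illustrative; the templates mirror the examples above.

def optimized_prompt(task, n_caveats=3, rank_by=None, meta=True):
    parts = [
        task,
        f"Give a 3-sentence summary, then {n_caveats} bullet points of caveats.",
    ]
    if rank_by:  # force prioritization along an explicit axis
        parts.append(f"Rank the options by {rank_by}.")
    if meta:  # surface the question you forgot to ask
        parts.append("Finally: what key question did I forget to ask?")
    return "\n".join(parts)

print(optimized_prompt("Compare Python dict vs. sorted list for lookups.",
                       rank_by="memory usage for n=1e6"))
```

Keeping the constraints in a helper like this makes them easy to reuse across sessions instead of retyping them each time.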


The BQH Prompt Framework


Learning and Development Tips

For Programming:

For Theoretical Growth:

Research Practices:


Using LLMs as a Learning Partner


Final Motto

“Optimize, but leave room for the serendipitous.”
