DiJiang: Efficient Large Language Models through Compact Kernelization